Dec 09 09:42:58 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 09 09:42:58 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 09 09:42:58 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 09 09:42:58 localhost kernel: BIOS-provided physical RAM map:
Dec 09 09:42:58 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 09 09:42:58 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 09 09:42:58 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 09 09:42:58 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 09 09:42:58 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 09 09:42:58 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 09 09:42:58 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 09 09:42:58 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 09 09:42:58 localhost kernel: NX (Execute Disable) protection: active
Dec 09 09:42:58 localhost kernel: APIC: Static calls initialized
Dec 09 09:42:58 localhost kernel: SMBIOS 2.8 present.
Dec 09 09:42:58 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 09 09:42:58 localhost kernel: Hypervisor detected: KVM
Dec 09 09:42:58 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 09 09:42:58 localhost kernel: kvm-clock: using sched offset of 3836543039 cycles
Dec 09 09:42:58 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 09 09:42:58 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 09 09:42:58 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 09 09:42:58 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 09 09:42:58 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 09 09:42:58 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 09 09:42:58 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 09 09:42:58 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 09 09:42:58 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 09 09:42:58 localhost kernel: Using GB pages for direct mapping
Dec 09 09:42:58 localhost kernel: RAMDISK: [mem 0x2e955000-0x334a2fff]
Dec 09 09:42:58 localhost kernel: ACPI: Early table checksum verification disabled
Dec 09 09:42:58 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 09 09:42:58 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 09:42:58 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 09:42:58 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 09:42:58 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 09 09:42:58 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 09:42:58 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 09 09:42:58 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 09 09:42:58 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 09 09:42:58 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 09 09:42:58 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 09 09:42:58 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 09 09:42:58 localhost kernel: No NUMA configuration found
Dec 09 09:42:58 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 09 09:42:58 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec 09 09:42:58 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 09 09:42:58 localhost kernel: Zone ranges:
Dec 09 09:42:58 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 09 09:42:58 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 09 09:42:58 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 09 09:42:58 localhost kernel:   Device   empty
Dec 09 09:42:58 localhost kernel: Movable zone start for each node
Dec 09 09:42:58 localhost kernel: Early memory node ranges
Dec 09 09:42:58 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 09 09:42:58 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 09 09:42:58 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 09 09:42:58 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 09 09:42:58 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 09 09:42:58 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 09 09:42:58 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 09 09:42:58 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 09 09:42:58 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 09 09:42:58 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 09 09:42:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 09 09:42:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 09 09:42:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 09 09:42:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 09 09:42:58 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 09 09:42:58 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 09 09:42:58 localhost kernel: TSC deadline timer available
Dec 09 09:42:58 localhost kernel: CPU topo: Max. logical packages:   8
Dec 09 09:42:58 localhost kernel: CPU topo: Max. logical dies:       8
Dec 09 09:42:58 localhost kernel: CPU topo: Max. dies per package:   1
Dec 09 09:42:58 localhost kernel: CPU topo: Max. threads per core:   1
Dec 09 09:42:58 localhost kernel: CPU topo: Num. cores per package:     1
Dec 09 09:42:58 localhost kernel: CPU topo: Num. threads per package:   1
Dec 09 09:42:58 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 09 09:42:58 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 09 09:42:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 09 09:42:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 09 09:42:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 09 09:42:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 09 09:42:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 09 09:42:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 09 09:42:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 09 09:42:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 09 09:42:58 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 09 09:42:58 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 09 09:42:58 localhost kernel: Booting paravirtualized kernel on KVM
Dec 09 09:42:58 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 09 09:42:58 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 09 09:42:58 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 09 09:42:58 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 09 09:42:58 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 09 09:42:58 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 09 09:42:58 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 09 09:42:58 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 09 09:42:58 localhost kernel: random: crng init done
Dec 09 09:42:58 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 09 09:42:58 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 09 09:42:58 localhost kernel: Fallback order for Node 0: 0 
Dec 09 09:42:58 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 09 09:42:58 localhost kernel: Policy zone: Normal
Dec 09 09:42:58 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 09 09:42:58 localhost kernel: software IO TLB: area num 8.
Dec 09 09:42:58 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 09 09:42:58 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 09 09:42:58 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 09 09:42:58 localhost kernel: Dynamic Preempt: voluntary
Dec 09 09:42:58 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 09 09:42:58 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 09 09:42:58 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 09 09:42:58 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 09 09:42:58 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 09 09:42:58 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 09 09:42:58 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 09 09:42:58 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 09 09:42:58 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 09 09:42:58 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 09 09:42:58 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 09 09:42:58 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 09 09:42:58 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 09 09:42:58 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 09 09:42:58 localhost kernel: Console: colour VGA+ 80x25
Dec 09 09:42:58 localhost kernel: printk: console [ttyS0] enabled
Dec 09 09:42:58 localhost kernel: ACPI: Core revision 20230331
Dec 09 09:42:58 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 09 09:42:58 localhost kernel: x2apic enabled
Dec 09 09:42:58 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 09 09:42:58 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 09 09:42:58 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 09 09:42:58 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 09 09:42:58 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 09 09:42:58 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 09 09:42:58 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 09 09:42:58 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 09 09:42:58 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 09 09:42:58 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 09 09:42:58 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 09 09:42:58 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 09 09:42:58 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 09 09:42:58 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 09 09:42:58 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 09 09:42:58 localhost kernel: x86/bugs: return thunk changed
Dec 09 09:42:58 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 09 09:42:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 09 09:42:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 09 09:42:58 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 09 09:42:58 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 09 09:42:58 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 09 09:42:58 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 09 09:42:58 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 09 09:42:58 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 09 09:42:58 localhost kernel: landlock: Up and running.
Dec 09 09:42:58 localhost kernel: Yama: becoming mindful.
Dec 09 09:42:58 localhost kernel: SELinux:  Initializing.
Dec 09 09:42:58 localhost kernel: LSM support for eBPF active
Dec 09 09:42:58 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 09 09:42:58 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 09 09:42:58 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 09 09:42:58 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 09 09:42:58 localhost kernel: ... version:                0
Dec 09 09:42:58 localhost kernel: ... bit width:              48
Dec 09 09:42:58 localhost kernel: ... generic registers:      6
Dec 09 09:42:58 localhost kernel: ... value mask:             0000ffffffffffff
Dec 09 09:42:58 localhost kernel: ... max period:             00007fffffffffff
Dec 09 09:42:58 localhost kernel: ... fixed-purpose events:   0
Dec 09 09:42:58 localhost kernel: ... event mask:             000000000000003f
Dec 09 09:42:58 localhost kernel: signal: max sigframe size: 1776
Dec 09 09:42:58 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 09 09:42:58 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 09 09:42:58 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 09 09:42:58 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 09 09:42:58 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 09 09:42:58 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 09 09:42:58 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 09 09:42:58 localhost kernel: node 0 deferred pages initialised in 33ms
Dec 09 09:42:58 localhost kernel: Memory: 7774652K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 607516K reserved, 0K cma-reserved)
Dec 09 09:42:58 localhost kernel: devtmpfs: initialized
Dec 09 09:42:58 localhost kernel: x86/mm: Memory block size: 128MB
Dec 09 09:42:58 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 09 09:42:58 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 09 09:42:58 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 09 09:42:58 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 09 09:42:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 09 09:42:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 09 09:42:58 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 09 09:42:58 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 09 09:42:58 localhost kernel: audit: type=2000 audit(1765273375.928:1): state=initialized audit_enabled=0 res=1
Dec 09 09:42:58 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 09 09:42:58 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 09 09:42:58 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 09 09:42:58 localhost kernel: cpuidle: using governor menu
Dec 09 09:42:58 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 09 09:42:58 localhost kernel: PCI: Using configuration type 1 for base access
Dec 09 09:42:58 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 09 09:42:58 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 09 09:42:58 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 09 09:42:58 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 09 09:42:58 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 09 09:42:58 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 09 09:42:58 localhost kernel: Demotion targets for Node 0: null
Dec 09 09:42:58 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 09 09:42:58 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 09 09:42:58 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 09 09:42:58 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 09 09:42:58 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 09 09:42:58 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 09 09:42:58 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 09 09:42:58 localhost kernel: ACPI: Interpreter enabled
Dec 09 09:42:58 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 09 09:42:58 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 09 09:42:58 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 09 09:42:58 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 09 09:42:58 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 09 09:42:58 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 09 09:42:58 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [3] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [4] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [5] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [6] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [7] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [8] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [9] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [10] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [11] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [12] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [13] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [14] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [15] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [16] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [17] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [18] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [19] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [20] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [21] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [22] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [23] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [24] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [25] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [26] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [27] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [28] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [29] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [30] registered
Dec 09 09:42:58 localhost kernel: acpiphp: Slot [31] registered
Dec 09 09:42:58 localhost kernel: PCI host bridge to bus 0000:00
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 09 09:42:58 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 09 09:42:58 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 09 09:42:58 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 09 09:42:58 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 09 09:42:58 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 09 09:42:58 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 09 09:42:58 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 09 09:42:58 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 09 09:42:58 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 09 09:42:58 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 09 09:42:58 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 09 09:42:58 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 09 09:42:58 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 09 09:42:58 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 09 09:42:58 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 09 09:42:58 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 09 09:42:58 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 09 09:42:58 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 09 09:42:58 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 09 09:42:58 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 09 09:42:58 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 09 09:42:58 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 09 09:42:58 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 09 09:42:58 localhost kernel: iommu: Default domain type: Translated
Dec 09 09:42:58 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 09 09:42:58 localhost kernel: SCSI subsystem initialized
Dec 09 09:42:58 localhost kernel: ACPI: bus type USB registered
Dec 09 09:42:58 localhost kernel: usbcore: registered new interface driver usbfs
Dec 09 09:42:58 localhost kernel: usbcore: registered new interface driver hub
Dec 09 09:42:58 localhost kernel: usbcore: registered new device driver usb
Dec 09 09:42:58 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 09 09:42:58 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 09 09:42:58 localhost kernel: PTP clock support registered
Dec 09 09:42:58 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 09 09:42:58 localhost kernel: NetLabel: Initializing
Dec 09 09:42:58 localhost kernel: NetLabel:  domain hash size = 128
Dec 09 09:42:58 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 09 09:42:58 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 09 09:42:58 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 09 09:42:58 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 09 09:42:58 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 09 09:42:58 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 09 09:42:58 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 09 09:42:58 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 09 09:42:58 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 09 09:42:58 localhost kernel: vgaarb: loaded
Dec 09 09:42:58 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 09 09:42:58 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 09 09:42:58 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 09 09:42:58 localhost kernel: pnp: PnP ACPI init
Dec 09 09:42:58 localhost kernel: pnp 00:03: [dma 2]
Dec 09 09:42:58 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 09 09:42:58 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 09 09:42:58 localhost kernel: NET: Registered PF_INET protocol family
Dec 09 09:42:58 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 09 09:42:58 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 09 09:42:58 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 09 09:42:58 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 09 09:42:58 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 09 09:42:58 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 09 09:42:58 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 09 09:42:58 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 09 09:42:58 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 09 09:42:58 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 09 09:42:58 localhost kernel: NET: Registered PF_XDP protocol family
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 09 09:42:58 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 09 09:42:58 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 09 09:42:58 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 09 09:42:58 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73280 usecs
Dec 09 09:42:58 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 09 09:42:58 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 09 09:42:58 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 09 09:42:58 localhost kernel: ACPI: bus type thunderbolt registered
Dec 09 09:42:58 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 09 09:42:58 localhost kernel: Initialise system trusted keyrings
Dec 09 09:42:58 localhost kernel: Key type blacklist registered
Dec 09 09:42:58 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 09 09:42:58 localhost kernel: zbud: loaded
Dec 09 09:42:58 localhost kernel: integrity: Platform Keyring initialized
Dec 09 09:42:58 localhost kernel: integrity: Machine keyring initialized
Dec 09 09:42:58 localhost kernel: Freeing initrd memory: 77112K
Dec 09 09:42:58 localhost kernel: NET: Registered PF_ALG protocol family
Dec 09 09:42:58 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 09 09:42:58 localhost kernel: Key type asymmetric registered
Dec 09 09:42:58 localhost kernel: Asymmetric key parser 'x509' registered
Dec 09 09:42:58 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 09 09:42:58 localhost kernel: io scheduler mq-deadline registered
Dec 09 09:42:58 localhost kernel: io scheduler kyber registered
Dec 09 09:42:58 localhost kernel: io scheduler bfq registered
Dec 09 09:42:58 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 09 09:42:58 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 09 09:42:58 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 09 09:42:58 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 09 09:42:58 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 09 09:42:58 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 09 09:42:58 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 09 09:42:58 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 09 09:42:58 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 09 09:42:58 localhost kernel: Non-volatile memory driver v1.3
Dec 09 09:42:58 localhost kernel: rdac: device handler registered
Dec 09 09:42:58 localhost kernel: hp_sw: device handler registered
Dec 09 09:42:58 localhost kernel: emc: device handler registered
Dec 09 09:42:58 localhost kernel: alua: device handler registered
Dec 09 09:42:58 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 09 09:42:58 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 09 09:42:58 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 09 09:42:58 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 09 09:42:58 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 09 09:42:58 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 09 09:42:58 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 09 09:42:58 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 09 09:42:58 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 09 09:42:58 localhost kernel: hub 1-0:1.0: USB hub found
Dec 09 09:42:58 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 09 09:42:58 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 09 09:42:58 localhost kernel: usbserial: USB Serial support registered for generic
Dec 09 09:42:58 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 09 09:42:58 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 09 09:42:58 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 09 09:42:58 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 09 09:42:58 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 09 09:42:58 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 09 09:42:58 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-09T09:42:57 UTC (1765273377)
Dec 09 09:42:58 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 09 09:42:58 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 09 09:42:58 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 09 09:42:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 09 09:42:58 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 09 09:42:58 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 09 09:42:58 localhost kernel: usbcore: registered new interface driver usbhid
Dec 09 09:42:58 localhost kernel: usbhid: USB HID core driver
Dec 09 09:42:58 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 09 09:42:58 localhost kernel: Initializing XFRM netlink socket
Dec 09 09:42:58 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 09 09:42:58 localhost kernel: Segment Routing with IPv6
Dec 09 09:42:58 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 09 09:42:58 localhost kernel: mpls_gso: MPLS GSO support
Dec 09 09:42:58 localhost kernel: IPI shorthand broadcast: enabled
Dec 09 09:42:58 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 09 09:42:58 localhost kernel: AES CTR mode by8 optimization enabled
Dec 09 09:42:58 localhost kernel: sched_clock: Marking stable (2391005545, 153778355)->(2859927131, -315143231)
Dec 09 09:42:58 localhost kernel: registered taskstats version 1
Dec 09 09:42:58 localhost kernel: Loading compiled-in X.509 certificates
Dec 09 09:42:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 09 09:42:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 09 09:42:58 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 09 09:42:58 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 09 09:42:58 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 09 09:42:58 localhost kernel: Demotion targets for Node 0: null
Dec 09 09:42:58 localhost kernel: page_owner is disabled
Dec 09 09:42:58 localhost kernel: Key type .fscrypt registered
Dec 09 09:42:58 localhost kernel: Key type fscrypt-provisioning registered
Dec 09 09:42:58 localhost kernel: Key type big_key registered
Dec 09 09:42:58 localhost kernel: Key type encrypted registered
Dec 09 09:42:58 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 09 09:42:58 localhost kernel: Loading compiled-in module X.509 certificates
Dec 09 09:42:58 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 09 09:42:58 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 09 09:42:58 localhost kernel: ima: No architecture policies found
Dec 09 09:42:58 localhost kernel: evm: Initialising EVM extended attributes:
Dec 09 09:42:58 localhost kernel: evm: security.selinux
Dec 09 09:42:58 localhost kernel: evm: security.SMACK64 (disabled)
Dec 09 09:42:58 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 09 09:42:58 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 09 09:42:58 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 09 09:42:58 localhost kernel: evm: security.apparmor (disabled)
Dec 09 09:42:58 localhost kernel: evm: security.ima
Dec 09 09:42:58 localhost kernel: evm: security.capability
Dec 09 09:42:58 localhost kernel: evm: HMAC attrs: 0x1
Dec 09 09:42:58 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 09 09:42:58 localhost kernel: Running certificate verification RSA selftest
Dec 09 09:42:58 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 09 09:42:58 localhost kernel: Running certificate verification ECDSA selftest
Dec 09 09:42:58 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 09 09:42:58 localhost kernel: clk: Disabling unused clocks
Dec 09 09:42:58 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 09 09:42:58 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 09 09:42:58 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 09 09:42:58 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 09 09:42:58 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 09 09:42:58 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 09 09:42:58 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 09 09:42:58 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 09 09:42:58 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 09 09:42:58 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 09 09:42:58 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 09 09:42:58 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 09 09:42:58 localhost kernel: Run /init as init process
Dec 09 09:42:58 localhost kernel:   with arguments:
Dec 09 09:42:58 localhost kernel:     /init
Dec 09 09:42:58 localhost kernel:   with environment:
Dec 09 09:42:58 localhost kernel:     HOME=/
Dec 09 09:42:58 localhost kernel:     TERM=linux
Dec 09 09:42:58 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 09 09:42:58 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 09 09:42:58 localhost systemd[1]: Detected virtualization kvm.
Dec 09 09:42:58 localhost systemd[1]: Detected architecture x86-64.
Dec 09 09:42:58 localhost systemd[1]: Running in initrd.
Dec 09 09:42:58 localhost systemd[1]: No hostname configured, using default hostname.
Dec 09 09:42:58 localhost systemd[1]: Hostname set to <localhost>.
Dec 09 09:42:58 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 09 09:42:58 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 09 09:42:58 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 09 09:42:58 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 09 09:42:58 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 09 09:42:58 localhost systemd[1]: Reached target Local File Systems.
Dec 09 09:42:58 localhost systemd[1]: Reached target Path Units.
Dec 09 09:42:58 localhost systemd[1]: Reached target Slice Units.
Dec 09 09:42:58 localhost systemd[1]: Reached target Swaps.
Dec 09 09:42:58 localhost systemd[1]: Reached target Timer Units.
Dec 09 09:42:58 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 09 09:42:58 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 09 09:42:58 localhost systemd[1]: Listening on Journal Socket.
Dec 09 09:42:58 localhost systemd[1]: Listening on udev Control Socket.
Dec 09 09:42:58 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 09 09:42:58 localhost systemd[1]: Reached target Socket Units.
Dec 09 09:42:58 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 09 09:42:58 localhost systemd[1]: Starting Journal Service...
Dec 09 09:42:58 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 09 09:42:58 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 09 09:42:58 localhost systemd[1]: Starting Create System Users...
Dec 09 09:42:58 localhost systemd[1]: Starting Setup Virtual Console...
Dec 09 09:42:58 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 09 09:42:58 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 09 09:42:58 localhost systemd[1]: Finished Create System Users.
Dec 09 09:42:58 localhost systemd-journald[306]: Journal started
Dec 09 09:42:58 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/6aaf51230bdb461d92bbb40c4bea282b) is 8.0M, max 153.6M, 145.6M free.
Dec 09 09:42:58 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec 09 09:42:58 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec 09 09:42:58 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 09 09:42:58 localhost systemd[1]: Started Journal Service.
Dec 09 09:42:58 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 09 09:42:58 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 09 09:42:58 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 09 09:42:58 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 09 09:42:58 localhost systemd[1]: Finished Setup Virtual Console.
Dec 09 09:42:58 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 09 09:42:58 localhost systemd[1]: Starting dracut cmdline hook...
Dec 09 09:42:58 localhost dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Dec 09 09:42:58 localhost dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 09 09:42:58 localhost systemd[1]: Finished dracut cmdline hook.
Dec 09 09:42:58 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 09 09:42:58 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 09 09:42:58 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 09 09:42:58 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 09 09:42:58 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 09 09:42:58 localhost kernel: RPC: Registered udp transport module.
Dec 09 09:42:58 localhost kernel: RPC: Registered tcp transport module.
Dec 09 09:42:58 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 09 09:42:58 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 09 09:42:59 localhost rpc.statd[443]: Version 2.5.4 starting
Dec 09 09:42:59 localhost rpc.statd[443]: Initializing NSM state
Dec 09 09:42:59 localhost rpc.idmapd[448]: Setting log level to 0
Dec 09 09:42:59 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 09 09:42:59 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 09 09:42:59 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Dec 09 09:42:59 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 09 09:42:59 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 09 09:42:59 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 09 09:42:59 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 09 09:42:59 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 09 09:42:59 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 09 09:42:59 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 09 09:42:59 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 09 09:42:59 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 09 09:42:59 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 09 09:42:59 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 09 09:42:59 localhost systemd[1]: Reached target Network.
Dec 09 09:42:59 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 09 09:42:59 localhost systemd[1]: Starting dracut initqueue hook...
Dec 09 09:42:59 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 09 09:42:59 localhost systemd[1]: Reached target System Initialization.
Dec 09 09:42:59 localhost systemd[1]: Reached target Basic System.
Dec 09 09:42:59 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 09 09:42:59 localhost kernel: libata version 3.00 loaded.
Dec 09 09:42:59 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 09 09:42:59 localhost systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 09:42:59 localhost kernel: scsi host0: ata_piix
Dec 09 09:42:59 localhost kernel: scsi host1: ata_piix
Dec 09 09:42:59 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 09 09:42:59 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 09 09:42:59 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 09 09:42:59 localhost kernel:  vda: vda1
Dec 09 09:42:59 localhost kernel: ata1: found unknown device (class 0)
Dec 09 09:42:59 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 09 09:42:59 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 09 09:42:59 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 09 09:42:59 localhost systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 09 09:42:59 localhost systemd[1]: Reached target Initrd Root Device.
Dec 09 09:42:59 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 09 09:42:59 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 09 09:42:59 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 09 09:42:59 localhost systemd[1]: Finished dracut initqueue hook.
Dec 09 09:42:59 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 09 09:42:59 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 09 09:42:59 localhost systemd[1]: Reached target Remote File Systems.
Dec 09 09:42:59 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 09 09:42:59 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 09 09:42:59 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec 09 09:42:59 localhost systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Dec 09 09:42:59 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec 09 09:42:59 localhost systemd[1]: Mounting /sysroot...
Dec 09 09:43:00 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 09 09:43:00 localhost kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec 09 09:43:00 localhost kernel: XFS (vda1): Ending clean mount
Dec 09 09:43:00 localhost systemd[1]: Mounted /sysroot.
Dec 09 09:43:00 localhost systemd[1]: Reached target Initrd Root File System.
Dec 09 09:43:00 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 09 09:43:00 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 09 09:43:00 localhost systemd[1]: Reached target Initrd File Systems.
Dec 09 09:43:00 localhost systemd[1]: Reached target Initrd Default Target.
Dec 09 09:43:00 localhost systemd[1]: Starting dracut mount hook...
Dec 09 09:43:00 localhost systemd[1]: Finished dracut mount hook.
Dec 09 09:43:00 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 09 09:43:00 localhost rpc.idmapd[448]: exiting on signal 15
Dec 09 09:43:00 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 09 09:43:00 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 09 09:43:00 localhost systemd[1]: Stopped target Network.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Timer Units.
Dec 09 09:43:00 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 09 09:43:00 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Basic System.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Path Units.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Remote File Systems.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Slice Units.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Socket Units.
Dec 09 09:43:00 localhost systemd[1]: Stopped target System Initialization.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Local File Systems.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Swaps.
Dec 09 09:43:00 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped dracut mount hook.
Dec 09 09:43:00 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 09 09:43:00 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 09 09:43:00 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 09 09:43:00 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 09 09:43:00 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 09 09:43:00 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 09 09:43:00 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 09 09:43:00 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 09 09:43:00 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 09 09:43:00 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 09 09:43:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 09 09:43:00 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Closed udev Control Socket.
Dec 09 09:43:00 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Closed udev Kernel Socket.
Dec 09 09:43:00 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 09 09:43:00 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 09 09:43:00 localhost systemd[1]: Starting Cleanup udev Database...
Dec 09 09:43:00 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 09 09:43:00 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 09 09:43:00 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Stopped Create System Users.
Dec 09 09:43:00 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 09 09:43:00 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 09 09:43:00 localhost systemd[1]: Finished Cleanup udev Database.
Dec 09 09:43:00 localhost systemd[1]: Reached target Switch Root.
Dec 09 09:43:00 localhost systemd[1]: Starting Switch Root...
Dec 09 09:43:00 localhost systemd[1]: Switching root.
Dec 09 09:43:00 localhost systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Dec 09 09:43:00 localhost systemd-journald[306]: Journal stopped
Dec 09 09:43:01 localhost kernel: audit: type=1404 audit(1765273381.005:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 09 09:43:01 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 09:43:01 localhost kernel: SELinux:  policy capability open_perms=1
Dec 09 09:43:01 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 09:43:01 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 09 09:43:01 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 09:43:01 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 09:43:01 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 09:43:01 localhost kernel: audit: type=1403 audit(1765273381.130:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 09 09:43:01 localhost systemd[1]: Successfully loaded SELinux policy in 127.382ms.
Dec 09 09:43:01 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.873ms.
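The audit records and policy-load timing above show SELinux switching from permissive to enforcing (enforcing=1 old_enforcing=0) as the real root filesystem takes over from the initrd. A minimal sketch, assuming Python 3 on the same host and a mounted selinuxfs, that reads the resulting mode back from the kernel:

    #!/usr/bin/env python3
    # Minimal sketch: report the SELinux mode the kernel enforces after the
    # policy load logged above. Reads the selinuxfs node directly and falls
    # back to "disabled" if selinuxfs is not mounted.
    from pathlib import Path

    def selinux_mode(enforce_path="/sys/fs/selinux/enforce"):
        p = Path(enforce_path)
        if not p.exists():
            return "disabled"
        return "enforcing" if p.read_text().strip() == "1" else "permissive"

    if __name__ == "__main__":
        print(selinux_mode())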
Dec 09 09:43:01 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 09 09:43:01 localhost systemd[1]: Detected virtualization kvm.
Dec 09 09:43:01 localhost systemd[1]: Detected architecture x86-64.
Dec 09 09:43:01 localhost systemd-rc-local-generator[636]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 09:43:01 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 09 09:43:01 localhost systemd[1]: Stopped Switch Root.
Dec 09 09:43:01 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 09 09:43:01 localhost systemd[1]: Created slice Slice /system/getty.
Dec 09 09:43:01 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 09 09:43:01 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 09 09:43:01 localhost systemd[1]: Created slice User and Session Slice.
Dec 09 09:43:01 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 09 09:43:01 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 09 09:43:01 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 09 09:43:01 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 09 09:43:01 localhost systemd[1]: Stopped target Switch Root.
Dec 09 09:43:01 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 09 09:43:01 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 09 09:43:01 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 09 09:43:01 localhost systemd[1]: Reached target Path Units.
Dec 09 09:43:01 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 09 09:43:01 localhost systemd[1]: Reached target Slice Units.
Dec 09 09:43:01 localhost systemd[1]: Reached target Swaps.
Dec 09 09:43:01 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 09 09:43:01 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 09 09:43:01 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 09 09:43:01 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 09 09:43:01 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 09 09:43:01 localhost systemd[1]: Listening on udev Control Socket.
Dec 09 09:43:01 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 09 09:43:01 localhost systemd[1]: Mounting Huge Pages File System...
Dec 09 09:43:01 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 09 09:43:01 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 09 09:43:01 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 09 09:43:01 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 09 09:43:01 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 09 09:43:01 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 09 09:43:01 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 09 09:43:01 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 09 09:43:01 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 09 09:43:01 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 09 09:43:01 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 09 09:43:01 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 09 09:43:01 localhost systemd[1]: Stopped Journal Service.
Dec 09 09:43:01 localhost systemd[1]: Starting Journal Service...
Dec 09 09:43:01 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 09 09:43:01 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 09 09:43:01 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 09 09:43:01 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 09 09:43:01 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 09 09:43:01 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 09 09:43:01 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 09 09:43:01 localhost kernel: fuse: init (API version 7.37)
Dec 09 09:43:01 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 09 09:43:01 localhost systemd-journald[677]: Journal started
Dec 09 09:43:01 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 09 09:43:01 localhost systemd[1]: Queued start job for default target Multi-User System.
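systemd-journald has opened a fresh runtime journal under /run/log/journal and PID 1 has queued the default target (Multi-User System). A minimal sketch, assuming the systemd command-line tools are on PATH and the caller may read the journal, that queries both facts after boot:

    #!/usr/bin/env python3
    # Minimal sketch: query the default target and journal disk usage that
    # the two log lines above describe. Assumes systemctl/journalctl exist.
    import subprocess

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

    if __name__ == "__main__":
        print("default target:", run(["systemctl", "get-default"]))
        print(run(["journalctl", "--disk-usage"]))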
Dec 09 09:43:01 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 09 09:43:01 localhost systemd[1]: Mounted Huge Pages File System.
Dec 09 09:43:01 localhost systemd[1]: Started Journal Service.
Dec 09 09:43:01 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 09 09:43:01 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 09 09:43:01 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 09 09:43:01 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 09 09:43:01 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 09 09:43:01 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 09 09:43:01 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 09 09:43:01 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 09 09:43:01 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 09 09:43:01 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 09 09:43:01 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 09 09:43:01 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 09 09:43:01 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 09 09:43:01 localhost systemd[1]: Finished Apply Kernel Variables.
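"Apply Kernel Variables" is systemd-sysctl writing configured sysctl settings; every such value is visible under /proc/sys. A minimal sketch reading a few common parameters directly (the parameter names below are illustrative, not taken from this host's configuration):

    #!/usr/bin/env python3
    # Minimal sketch: read kernel variables of the kind systemd-sysctl
    # applies, straight from /proc/sys. Parameter names are examples only.
    from pathlib import Path

    def sysctl(name):
        return Path("/proc/sys", name.replace(".", "/")).read_text().strip()

    if __name__ == "__main__":
        for key in ("kernel.hostname", "net.ipv4.ip_forward", "vm.swappiness"):
            print(key, "=", sysctl(key))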
Dec 09 09:43:01 localhost kernel: ACPI: bus type drm_connector registered
Dec 09 09:43:01 localhost systemd[1]: Mounting FUSE Control File System...
Dec 09 09:43:01 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 09 09:43:01 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 09 09:43:01 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 09 09:43:01 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 09 09:43:01 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 09 09:43:01 localhost systemd[1]: Starting Create System Users...
Dec 09 09:43:01 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 09 09:43:01 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 09 09:43:01 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec 09 09:43:01 localhost systemd-journald[677]: Received client request to flush runtime journal.
Dec 09 09:43:01 localhost systemd[1]: Mounted FUSE Control File System.
Dec 09 09:43:01 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 09 09:43:01 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 09 09:43:01 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 09 09:43:01 localhost systemd[1]: Finished Create System Users.
Dec 09 09:43:01 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 09 09:43:01 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 09 09:43:01 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 09 09:43:01 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 09 09:43:01 localhost systemd[1]: Reached target Local File Systems.
Dec 09 09:43:01 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 09 09:43:01 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 09 09:43:01 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 09 09:43:01 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 09 09:43:01 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 09 09:43:01 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 09 09:43:01 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 09 09:43:01 localhost bootctl[695]: Couldn't find EFI system partition, skipping.
Dec 09 09:43:01 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 09 09:43:01 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 09 09:43:01 localhost systemd[1]: Starting Security Auditing Service...
Dec 09 09:43:01 localhost systemd[1]: Starting RPC Bind...
Dec 09 09:43:01 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 09 09:43:01 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 09 09:43:01 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 09 09:43:01 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 09 09:43:01 localhost systemd[1]: Started RPC Bind.
Dec 09 09:43:02 localhost augenrules[706]: /sbin/augenrules: No change
Dec 09 09:43:02 localhost augenrules[721]: No rules
Dec 09 09:43:02 localhost augenrules[721]: enabled 1
Dec 09 09:43:02 localhost augenrules[721]: failure 1
Dec 09 09:43:02 localhost augenrules[721]: pid 700
Dec 09 09:43:02 localhost augenrules[721]: rate_limit 0
Dec 09 09:43:02 localhost augenrules[721]: backlog_limit 8192
Dec 09 09:43:02 localhost augenrules[721]: lost 0
Dec 09 09:43:02 localhost augenrules[721]: backlog 0
Dec 09 09:43:02 localhost augenrules[721]: backlog_wait_time 60000
Dec 09 09:43:02 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 09 09:43:02 localhost augenrules[721]: enabled 1
Dec 09 09:43:02 localhost augenrules[721]: failure 1
Dec 09 09:43:02 localhost augenrules[721]: pid 700
Dec 09 09:43:02 localhost augenrules[721]: rate_limit 0
Dec 09 09:43:02 localhost augenrules[721]: backlog_limit 8192
Dec 09 09:43:02 localhost augenrules[721]: lost 0
Dec 09 09:43:02 localhost augenrules[721]: backlog 3
Dec 09 09:43:02 localhost augenrules[721]: backlog_wait_time 60000
Dec 09 09:43:02 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 09 09:43:02 localhost augenrules[721]: enabled 1
Dec 09 09:43:02 localhost augenrules[721]: failure 1
Dec 09 09:43:02 localhost augenrules[721]: pid 700
Dec 09 09:43:02 localhost augenrules[721]: rate_limit 0
Dec 09 09:43:02 localhost augenrules[721]: backlog_limit 8192
Dec 09 09:43:02 localhost augenrules[721]: lost 0
Dec 09 09:43:02 localhost augenrules[721]: backlog 0
Dec 09 09:43:02 localhost augenrules[721]: backlog_wait_time 60000
Dec 09 09:43:02 localhost augenrules[721]: backlog_wait_time_actual 0
Dec 09 09:43:02 localhost systemd[1]: Started Security Auditing Service.
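The repeated blocks above are augenrules echoing the kernel audit status before and after loading rules (here: no rules, auditd pid 700, backlog_limit 8192). A minimal sketch, assuming auditctl is installed and the caller has privilege to query the audit subsystem, that parses the same key/value status:

    #!/usr/bin/env python3
    # Minimal sketch: read back the kernel audit status fields that
    # augenrules dumps above. Requires privilege to run auditctl.
    import subprocess

    def audit_status():
        out = subprocess.run(["auditctl", "-s"],
                             capture_output=True, text=True, check=True).stdout
        status = {}
        for line in out.splitlines():
            key, _, value = line.partition(" ")
            status[key] = value.strip()
        return status

    if __name__ == "__main__":
        s = audit_status()
        print("auditd pid:", s.get("pid"), "backlog_limit:", s.get("backlog_limit"))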
Dec 09 09:43:02 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 09 09:43:02 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 09 09:43:02 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 09 09:43:02 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 09 09:43:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 09 09:43:02 localhost systemd[1]: Starting Update is Completed...
Dec 09 09:43:02 localhost systemd[1]: Finished Update is Completed.
Dec 09 09:43:02 localhost systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec 09 09:43:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 09 09:43:02 localhost systemd[1]: Reached target System Initialization.
Dec 09 09:43:02 localhost systemd[1]: Started dnf makecache --timer.
Dec 09 09:43:02 localhost systemd[1]: Started Daily rotation of log files.
Dec 09 09:43:02 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 09 09:43:02 localhost systemd[1]: Reached target Timer Units.
Dec 09 09:43:02 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 09 09:43:02 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 09 09:43:02 localhost systemd[1]: Reached target Socket Units.
Dec 09 09:43:02 localhost systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 09:43:02 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 09 09:43:02 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 09 09:43:02 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 09 09:43:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 09 09:43:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 09 09:43:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 09 09:43:02 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 09 09:43:02 localhost systemd[1]: Reached target Basic System.
Dec 09 09:43:02 localhost dbus-broker-lau[768]: Ready
Dec 09 09:43:02 localhost systemd[1]: Starting NTP client/server...
Dec 09 09:43:02 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 09 09:43:02 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 09 09:43:02 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 09 09:43:02 localhost chronyd[784]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 09 09:43:02 localhost chronyd[784]: Loaded 0 symmetric keys
Dec 09 09:43:02 localhost chronyd[784]: Using right/UTC timezone to obtain leap second data
Dec 09 09:43:02 localhost chronyd[784]: Loaded seccomp filter (level 2)
Dec 09 09:43:02 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 09 09:43:02 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 09 09:43:02 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 09 09:43:02 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 09 09:43:02 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 09 09:43:02 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 09 09:43:02 localhost kernel: kvm_amd: TSC scaling supported
Dec 09 09:43:02 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 09 09:43:02 localhost kernel: kvm_amd: Nested Paging enabled
Dec 09 09:43:02 localhost kernel: kvm_amd: LBR virtualization supported
Dec 09 09:43:02 localhost kernel: Console: switching to colour dummy device 80x25
Dec 09 09:43:02 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 09 09:43:02 localhost kernel: [drm] features: -context_init
Dec 09 09:43:02 localhost kernel: [drm] number of scanouts: 1
Dec 09 09:43:02 localhost kernel: [drm] number of cap sets: 0
Dec 09 09:43:02 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 09 09:43:02 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 09 09:43:02 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 09 09:43:02 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 09 09:43:02 localhost systemd[1]: Started irqbalance daemon.
Dec 09 09:43:02 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 09 09:43:02 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 09:43:02 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 09:43:02 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 09:43:02 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 09 09:43:02 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 09 09:43:02 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 09 09:43:02 localhost systemd[1]: Starting User Login Management...
Dec 09 09:43:02 localhost systemd[1]: Started NTP client/server.
Dec 09 09:43:02 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 09 09:43:02 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 09 09:43:02 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 09 09:43:02 localhost systemd-logind[806]: New seat seat0.
Dec 09 09:43:02 localhost systemd-logind[806]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 09 09:43:02 localhost systemd-logind[806]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 09 09:43:02 localhost systemd[1]: Started User Login Management.
Dec 09 09:43:02 localhost iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Dec 09 09:43:02 localhost systemd[1]: Finished IPv4 firewall with iptables.
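The iptables init script reports its IPv4 rules applied. A minimal sketch, assuming root (or CAP_NET_ADMIN) and the iptables userspace tools, that lists the active rule set in rule-spec form:

    #!/usr/bin/env python3
    # Minimal sketch: list the rules the iptables init script just loaded.
    # "iptables -S" prints them as rule specifications; needs privilege.
    import subprocess

    if __name__ == "__main__":
        out = subprocess.run(["iptables", "-S"],
                             capture_output=True, text=True, check=True).stdout
        for rule in out.splitlines():
            print(rule)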
Dec 09 09:43:03 localhost cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 09 Dec 2025 09:43:03 +0000. Up 7.91 seconds.
Dec 09 09:43:03 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 09 09:43:03 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 09 09:43:03 localhost systemd[1]: run-cloud\x2dinit-tmp-tmp0n831rvl.mount: Deactivated successfully.
Dec 09 09:43:03 localhost systemd[1]: Starting Hostname Service...
Dec 09 09:43:03 localhost systemd[1]: Started Hostname Service.
Dec 09 09:43:03 np0005551604.novalocal systemd-hostnamed[851]: Hostname set to <np0005551604.novalocal> (static)
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Reached target Preparation for Network.
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Starting Network Manager...
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6115] NetworkManager (version 1.54.2-1.el9) is starting... (boot:f43569a1-1096-4e67-91b2-bda287c55398)
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6118] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6175] manager[0x5653cee44000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6219] hostname: hostname: using hostnamed
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6219] hostname: static hostname changed from (none) to "np0005551604.novalocal"
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6223] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6320] manager[0x5653cee44000]: rfkill: Wi-Fi hardware radio set enabled
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6320] manager[0x5653cee44000]: rfkill: WWAN hardware radio set enabled
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6351] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6351] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6352] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6352] manager: Networking is enabled by state file
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6354] settings: Loaded settings plugin: keyfile (internal)
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6362] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6379] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6389] dhcp: init: Using DHCP client 'internal'
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6391] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6401] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6407] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6414] device (lo): Activation: starting connection 'lo' (4d2460cc-3851-4697-811d-bb6085f75db6)
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6421] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6423] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6446] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6449] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6451] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6453] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6455] device (eth0): carrier: link connected
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6458] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6464] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6469] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6475] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6476] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6478] manager: NetworkManager state is now CONNECTING
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6480] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6485] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6487] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Started Network Manager.
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Reached target Network.
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6820] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6823] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 09 09:43:03 np0005551604.novalocal NetworkManager[856]: <info>  [1765273383.6828] device (lo): Activation: successful, device activated.
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Reached target NFS client services.
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: Reached target Remote File Systems.
Dec 09 09:43:03 np0005551604.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 09 09:43:05 np0005551604.novalocal NetworkManager[856]: <info>  [1765273385.1497] dhcp4 (eth0): state changed new lease, address=38.102.83.201
Dec 09 09:43:05 np0005551604.novalocal NetworkManager[856]: <info>  [1765273385.1510] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 09 09:43:05 np0005551604.novalocal NetworkManager[856]: <info>  [1765273385.1533] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 09:43:05 np0005551604.novalocal NetworkManager[856]: <info>  [1765273385.1579] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 09:43:05 np0005551604.novalocal NetworkManager[856]: <info>  [1765273385.1581] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 09:43:05 np0005551604.novalocal NetworkManager[856]: <info>  [1765273385.1584] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 09:43:05 np0005551604.novalocal NetworkManager[856]: <info>  [1765273385.1587] device (eth0): Activation: successful, device activated.
Dec 09 09:43:05 np0005551604.novalocal NetworkManager[856]: <info>  [1765273385.1592] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 09 09:43:05 np0005551604.novalocal NetworkManager[856]: <info>  [1765273385.1595] manager: startup complete
Dec 09 09:43:05 np0005551604.novalocal systemd[1]: Finished Network Manager Wait Online.
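NetworkManager has activated 'System eth0' with its internal DHCP client (lease 38.102.83.201) and reports startup complete, which lets NetworkManager-wait-online finish. A minimal sketch, assuming nmcli is available, that lists per-device state in terse form:

    #!/usr/bin/env python3
    # Minimal sketch: show what NetworkManager reports for each device
    # after "startup complete" above, using nmcli's terse output.
    import subprocess

    def nm_devices():
        out = subprocess.run(
            ["nmcli", "-t", "-f", "DEVICE,STATE,CONNECTION", "device"],
            capture_output=True, text=True, check=True).stdout
        return [line.split(":") for line in out.splitlines() if line]

    if __name__ == "__main__":
        for device, state, connection in nm_devices():
            print(f"{device}: {state} ({connection or 'no connection'})")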
Dec 09 09:43:05 np0005551604.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 09 Dec 2025 09:43:05 +0000. Up 10.30 seconds.
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.201         | 255.255.255.0 | global | fa:16:3e:10:2e:e6 |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe10:2ee6/64 |       .       |  link  | fa:16:3e:10:2e:e6 |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Dec 09 09:43:05 np0005551604.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
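cloud-init's ci-info tables above summarise the interfaces and routes it found. A minimal sketch that rebuilds the IPv4 route table from /proc/net/route, where destination, gateway and mask appear as hex-encoded little-endian words:

    #!/usr/bin/env python3
    # Minimal sketch: reconstruct the IPv4 route table cloud-init prints
    # above from /proc/net/route.
    import socket
    import struct

    def hex_to_ip(hex_addr):
        # /proc/net/route stores addresses as little-endian hex words.
        return socket.inet_ntoa(struct.pack("<I", int(hex_addr, 16)))

    def routes(path="/proc/net/route"):
        with open(path) as fh:
            next(fh)  # skip the header line
            for line in fh:
                fields = line.split()
                yield fields[0], hex_to_ip(fields[1]), hex_to_ip(fields[2]), hex_to_ip(fields[7])

    if __name__ == "__main__":
        for iface, dest, gw, mask in routes():
            print(f"{dest}/{mask} via {gw} dev {iface}")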
Dec 09 09:43:06 np0005551604.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Dec 09 09:43:06 np0005551604.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 09 09:43:06 np0005551604.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Dec 09 09:43:06 np0005551604.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Dec 09 09:43:06 np0005551604.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Dec 09 09:43:06 np0005551604.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
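cloud-init has created the cloud-user account and added it to the adm and systemd-journal groups. A minimal sketch using the standard pwd/grp modules to verify that membership (the user name is taken from the log):

    #!/usr/bin/env python3
    # Minimal sketch: confirm the account and group memberships created by
    # the useradd calls logged above.
    import grp
    import pwd

    def user_groups(name):
        primary = grp.getgrgid(pwd.getpwnam(name).pw_gid).gr_name
        supplementary = [g.gr_name for g in grp.getgrall() if name in g.gr_mem]
        return primary, supplementary

    if __name__ == "__main__":
        primary, extra = user_groups("cloud-user")
        print("primary:", primary, "supplementary:", ", ".join(extra))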
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: Generating public/private rsa key pair.
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: The key fingerprint is:
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: SHA256:8x6ylHaRMbQ+jPb+nRI56qR2RqxhnGZJeFEu4+cG5C8 root@np0005551604.novalocal
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: The key's randomart image is:
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: +---[RSA 3072]----+
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |         .o      |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |        .o .     |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       .+.=      |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |      .+o* +     |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       +S+B  .   |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       .OXoo+    |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       +E+X. o   |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       oo@+... . |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       .o++...o  |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: The key fingerprint is:
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: SHA256:q7BPSI9W8xkqDocR514/r20AaKBWJfXCAR/YXH8d4y4 root@np0005551604.novalocal
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: The key's randomart image is:
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: +---[ECDSA 256]---+
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |   oB=..     o   |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |  .o+o+ .   o o  |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: | .o..= . . . o   |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |.. +o o   . .    |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |. ..o +.S  E .   |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |   = * =.+  .    |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |  o O + *.       |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |   = = . +.      |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |    o.o .oo      |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: +----[SHA256]-----+
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: The key fingerprint is:
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: SHA256:nk0BJT+6pnFEBUlVFVFdgAs/mr/6sbEKjFwlpRSlZ6I root@np0005551604.novalocal
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: The key's randomart image is:
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: +--[ED25519 256]--+
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       .BB*..o===|
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       ..O. .   .|
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |        * Bo .   |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       o B o+    |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |      E S .o .   |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |     . * =o      |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |      + O ..o    |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |       = .  .=   |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: |      .   o+=.   |
Dec 09 09:43:06 np0005551604.novalocal cloud-init[921]: +----[SHA256]-----+
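The three key-generation blocks above print SHA256 fingerprints for the new RSA, ECDSA and ed25519 host keys. A minimal sketch that recomputes such a fingerprint from a public key file: base64-decode the key blob, hash it with SHA-256, and base64-encode the digest without padding:

    #!/usr/bin/env python3
    # Minimal sketch: recompute an OpenSSH SHA256 fingerprint (the format
    # shown in the key-generation output above) from a host public key.
    import base64
    import hashlib
    import sys

    def fingerprint(pub_path):
        # .pub files look like: "ssh-ed25519 AAAA... comment"
        blob_b64 = open(pub_path).read().split()[1]
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    if __name__ == "__main__":
        path = sys.argv[1] if len(sys.argv) > 1 else "/etc/ssh/ssh_host_ed25519_key.pub"
        print(fingerprint(path))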
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Reached target Network is Online.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Starting System Logging Service...
Dec 09 09:43:07 np0005551604.novalocal sm-notify[1005]: Version 2.5.4 starting
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Starting Permit User Sessions...
Dec 09 09:43:07 np0005551604.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 09 09:43:07 np0005551604.novalocal sshd[1007]: Server listening on :: port 22.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Finished Permit User Sessions.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Started Command Scheduler.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Started Getty on tty1.
Dec 09 09:43:07 np0005551604.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Dec 09 09:43:07 np0005551604.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 09 09:43:07 np0005551604.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 84% if used.)
Dec 09 09:43:07 np0005551604.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Dec 09 09:43:07 np0005551604.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Dec 09 09:43:07 np0005551604.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Reached target Login Prompts.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Started System Logging Service.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Reached target Multi-User System.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 09 09:43:07 np0005551604.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 09:43:07 np0005551604.novalocal kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Dec 09 09:43:07 np0005551604.novalocal kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
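kdumpctl found no kdump initramfs and is rebuilding one; this only works because the crashkernel= option on the kernel command line reserved memory for the crash kernel. A minimal sketch, assuming a kernel built with kexec support, that checks the reservation from /proc/cmdline and /sys/kernel/kexec_crash_size:

    #!/usr/bin/env python3
    # Minimal sketch: verify the crash-kernel reservation the kdump rebuild
    # above relies on.
    from pathlib import Path

    def crashkernel_setting():
        opts = Path("/proc/cmdline").read_text().split()
        return next((o for o in opts if o.startswith("crashkernel=")), None)

    if __name__ == "__main__":
        print("cmdline option:", crashkernel_setting())
        print("reserved bytes:", Path("/sys/kernel/kexec_crash_size").read_text().strip())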
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1110]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 09 Dec 2025 09:43:07 +0000. Up 12.17 seconds.
Dec 09 09:43:07 np0005551604.novalocal sshd-session[1134]: Connection reset by 38.102.83.114 port 48314 [preauth]
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 09 09:43:07 np0005551604.novalocal sshd-session[1143]: Unable to negotiate with 38.102.83.114 port 48326: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 09 09:43:07 np0005551604.novalocal sshd-session[1160]: Unable to negotiate with 38.102.83.114 port 48340: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 09 09:43:07 np0005551604.novalocal sshd-session[1166]: Unable to negotiate with 38.102.83.114 port 48350: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 09 09:43:07 np0005551604.novalocal sshd-session[1183]: Connection reset by 38.102.83.114 port 48354 [preauth]
Dec 09 09:43:07 np0005551604.novalocal sshd-session[1218]: Unable to negotiate with 38.102.83.114 port 48386: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 09 09:43:07 np0005551604.novalocal sshd-session[1151]: Connection closed by 38.102.83.114 port 48332 [preauth]
Dec 09 09:43:07 np0005551604.novalocal sshd-session[1230]: Unable to negotiate with 38.102.83.114 port 48394: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 09 09:43:07 np0005551604.novalocal sshd-session[1199]: Connection closed by 38.102.83.114 port 48370 [preauth]
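The burst of sshd-session preauth failures above is a remote host (38.102.83.114) probing which host key algorithms the server will accept. A minimal sketch that tallies the offered key types from an exported log read on stdin; pipe journalctl or /var/log/secure output into it:

    #!/usr/bin/env python3
    # Minimal sketch: count the host key types offered in sshd
    # "Unable to negotiate ... Their offer:" lines, read from stdin.
    import re
    import sys
    from collections import Counter

    OFFER = re.compile(r"no matching host key type found\. Their offer: (\S+)")

    def tally(lines):
        counts = Counter()
        for line in lines:
            m = OFFER.search(line)
            if m:
                counts.update(m.group(1).split(","))
        return counts

    if __name__ == "__main__":
        for key_type, count in tally(sys.stdin).most_common():
            print(f"{count:4d}  {key_type}")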
Dec 09 09:43:07 np0005551604.novalocal dracut[1284]: dracut-057-102.git20250818.el9
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1302]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 09 Dec 2025 09:43:07 +0000. Up 12.54 seconds.
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1304]: #############################################################
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1309]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1316]: 256 SHA256:q7BPSI9W8xkqDocR514/r20AaKBWJfXCAR/YXH8d4y4 root@np0005551604.novalocal (ECDSA)
Dec 09 09:43:07 np0005551604.novalocal dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1324]: 256 SHA256:nk0BJT+6pnFEBUlVFVFdgAs/mr/6sbEKjFwlpRSlZ6I root@np0005551604.novalocal (ED25519)
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1331]: 3072 SHA256:8x6ylHaRMbQ+jPb+nRI56qR2RqxhnGZJeFEu4+cG5C8 root@np0005551604.novalocal (RSA)
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1333]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1335]: #############################################################
Dec 09 09:43:07 np0005551604.novalocal cloud-init[1302]: Cloud-init v. 24.4-7.el9 finished at Tue, 09 Dec 2025 09:43:07 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.71 seconds
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 09 09:43:07 np0005551604.novalocal systemd[1]: Reached target Cloud-init target.
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: memstrack is not available
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 09 09:43:08 np0005551604.novalocal dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: memstrack is not available
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: *** Including module: systemd ***
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: *** Including module: fips ***
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: *** Including module: systemd-initrd ***
Dec 09 09:43:09 np0005551604.novalocal dracut[1287]: *** Including module: i18n ***
Dec 09 09:43:10 np0005551604.novalocal dracut[1287]: *** Including module: drm ***
Dec 09 09:43:10 np0005551604.novalocal chronyd[784]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Dec 09 09:43:10 np0005551604.novalocal chronyd[784]: System clock TAI offset set to 37 seconds
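chronyd has selected 162.159.200.123 as its synchronisation source and set the TAI offset. A minimal sketch, assuming chronyc can reach chronyd's command socket, that prints the tracking summary:

    #!/usr/bin/env python3
    # Minimal sketch: once chronyd has picked a source (as logged above),
    # "chronyc tracking" reports the reference ID, stratum and offset.
    import subprocess

    if __name__ == "__main__":
        out = subprocess.run(["chronyc", "tracking"],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            if line.startswith(("Reference ID", "Stratum", "System time")):
                print(line)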
Dec 09 09:43:10 np0005551604.novalocal dracut[1287]: *** Including module: prefixdevname ***
Dec 09 09:43:10 np0005551604.novalocal dracut[1287]: *** Including module: kernel-modules ***
Dec 09 09:43:10 np0005551604.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]: *** Including module: kernel-modules-extra ***
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]: *** Including module: qemu ***
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]: *** Including module: fstab-sys ***
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]: *** Including module: rootfs-block ***
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]: *** Including module: terminfo ***
Dec 09 09:43:11 np0005551604.novalocal dracut[1287]: *** Including module: udev-rules ***
Dec 09 09:43:12 np0005551604.novalocal dracut[1287]: Skipping udev rule: 91-permissions.rules
Dec 09 09:43:12 np0005551604.novalocal dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 09 09:43:12 np0005551604.novalocal dracut[1287]: *** Including module: virtiofs ***
Dec 09 09:43:12 np0005551604.novalocal dracut[1287]: *** Including module: dracut-systemd ***
Dec 09 09:43:12 np0005551604.novalocal dracut[1287]: *** Including module: usrmount ***
Dec 09 09:43:12 np0005551604.novalocal dracut[1287]: *** Including module: base ***
Dec 09 09:43:12 np0005551604.novalocal dracut[1287]: *** Including module: fs-lib ***
Dec 09 09:43:12 np0005551604.novalocal dracut[1287]: *** Including module: kdumpbase ***
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: Cannot change IRQ 35 affinity: Operation not permitted
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: IRQ 35 affinity is now unmanaged
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: Cannot change IRQ 33 affinity: Operation not permitted
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: IRQ 33 affinity is now unmanaged
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: IRQ 31 affinity is now unmanaged
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: IRQ 28 affinity is now unmanaged
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: Cannot change IRQ 34 affinity: Operation not permitted
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: IRQ 34 affinity is now unmanaged
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: IRQ 32 affinity is now unmanaged
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: IRQ 30 affinity is now unmanaged
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 09 09:43:13 np0005551604.novalocal irqbalance[795]: IRQ 29 affinity is now unmanaged
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:   microcode_ctl module: mangling fw_dir
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]: *** Including module: openssl ***
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]: *** Including module: shutdown ***
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]: *** Including module: squash ***
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]: *** Including modules done ***
Dec 09 09:43:13 np0005551604.novalocal dracut[1287]: *** Installing kernel module dependencies ***
Dec 09 09:43:14 np0005551604.novalocal dracut[1287]: *** Installing kernel module dependencies done ***
Dec 09 09:43:14 np0005551604.novalocal dracut[1287]: *** Resolving executable dependencies ***
Dec 09 09:43:15 np0005551604.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 09:43:16 np0005551604.novalocal dracut[1287]: *** Resolving executable dependencies done ***
Dec 09 09:43:16 np0005551604.novalocal dracut[1287]: *** Generating early-microcode cpio image ***
Dec 09 09:43:16 np0005551604.novalocal dracut[1287]: *** Store current command line parameters ***
Dec 09 09:43:16 np0005551604.novalocal dracut[1287]: Stored kernel commandline:
Dec 09 09:43:16 np0005551604.novalocal dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Dec 09 09:43:16 np0005551604.novalocal dracut[1287]: *** Install squash loader ***
Dec 09 09:43:17 np0005551604.novalocal dracut[1287]: *** Squashing the files inside the initramfs ***
Dec 09 09:43:18 np0005551604.novalocal dracut[1287]: *** Squashing the files inside the initramfs done ***
Dec 09 09:43:18 np0005551604.novalocal dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 09 09:43:18 np0005551604.novalocal dracut[1287]: *** Hardlinking files ***
Dec 09 09:43:18 np0005551604.novalocal dracut[1287]: Mode:           real
Dec 09 09:43:18 np0005551604.novalocal dracut[1287]: Files:          50
Dec 09 09:43:19 np0005551604.novalocal dracut[1287]: Linked:         0 files
Dec 09 09:43:19 np0005551604.novalocal dracut[1287]: Compared:       0 xattrs
Dec 09 09:43:19 np0005551604.novalocal dracut[1287]: Compared:       0 files
Dec 09 09:43:19 np0005551604.novalocal dracut[1287]: Saved:          0 B
Dec 09 09:43:19 np0005551604.novalocal dracut[1287]: Duration:       0.000866 seconds
Dec 09 09:43:19 np0005551604.novalocal dracut[1287]: *** Hardlinking files done ***
Dec 09 09:43:19 np0005551604.novalocal dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 09 09:43:20 np0005551604.novalocal kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Dec 09 09:43:20 np0005551604.novalocal kdumpctl[1019]: kdump: Starting kdump: [OK]
Dec 09 09:43:20 np0005551604.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 09 09:43:20 np0005551604.novalocal systemd[1]: Startup finished in 2.829s (kernel) + 3.019s (initrd) + 19.960s (userspace) = 25.808s.
Dec 09 09:43:33 np0005551604.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 09:43:45 np0005551604.novalocal sshd-session[4297]: Accepted publickey for zuul from 38.102.83.114 port 33620 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 09 09:43:45 np0005551604.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 09 09:43:45 np0005551604.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 09 09:43:45 np0005551604.novalocal systemd-logind[806]: New session 1 of user zuul.
Dec 09 09:43:45 np0005551604.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 09 09:43:45 np0005551604.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Queued start job for default target Main User Target.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Created slice User Application Slice.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Reached target Paths.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Reached target Timers.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Starting D-Bus User Message Bus Socket...
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Starting Create User's Volatile Files and Directories...
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Finished Create User's Volatile Files and Directories.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Listening on D-Bus User Message Bus Socket.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Reached target Sockets.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Reached target Basic System.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Reached target Main User Target.
Dec 09 09:43:45 np0005551604.novalocal systemd[4301]: Startup finished in 118ms.
Dec 09 09:43:45 np0005551604.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 09 09:43:45 np0005551604.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 09 09:43:45 np0005551604.novalocal sshd-session[4297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 09:43:46 np0005551604.novalocal python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 09:43:48 np0005551604.novalocal python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 09:43:55 np0005551604.novalocal python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 09:43:56 np0005551604.novalocal python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 09 09:43:58 np0005551604.novalocal python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfkcz+sdBC5Hc6a3qciBGOfVToJT+Vi5tHJjyssf7GAu8+GUwHSBRHjCzVaRVCv34TNjQ0KR1a8RsTfkO5SOcTPVfafZ5Z/VdIy6+tlxb46kLefLVVzxfQCOsF1HmJvVAySMCNdoQ+/P72lP//rFYh61NHxYgXRFafnxyaoaOZ1c8sVTb5YaLKtOjNXEsdLedgrvPEcxUU5XZc7+KOaUKZmomh8rrCMDgTiLhX9N8mH5bOhO9jI3VPtGvuOSko8ccfWS4U39k5QeO1v6LIowwnF92n8KQk/gPnQ9fC8wl30ZAbVA82lhOIHOGhwfc1SpES3TYhycJVdlC1jaH2/g6Pq9QQDtFVl3Q88XPXdxi9ek1mE+VpQCFYkIs01tWM1J7YQdF8qhvsNcNB4MpecSt4pQWmAzo6qjnv0pWndbINJZbLQmUkHsy70K3iSMg1izw6a9CwGeKfJ95TGS5Q6OA5wuzhKqi8vB5NcEyGG6dm1BtCG5MxlKlmEN1dO7wGlaM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:43:58 np0005551604.novalocal python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:43:59 np0005551604.novalocal python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:43:59 np0005551604.novalocal python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765273438.6523213-207-66674816575962/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=951b2216678a4038aa595858720b060b_id_rsa follow=False checksum=7fbdc97ee47d482a9627e7e4b08e66077527b1cf backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:43:59 np0005551604.novalocal python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:44:00 np0005551604.novalocal python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765273439.5455399-240-273145969724459/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=951b2216678a4038aa595858720b060b_id_rsa.pub follow=False checksum=cfe72fad0635b201e54e2478e069e3b356b4f8c7 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:01 np0005551604.novalocal python3[4971]: ansible-ping Invoked with data=pong
Dec 09 09:44:02 np0005551604.novalocal python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 09:44:04 np0005551604.novalocal python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 09 09:44:05 np0005551604.novalocal python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:05 np0005551604.novalocal python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:05 np0005551604.novalocal python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:05 np0005551604.novalocal python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:06 np0005551604.novalocal python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:06 np0005551604.novalocal python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:07 np0005551604.novalocal sudo[5229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvkdnzizyktvhcvrkyfbkipdeomwuubb ; /usr/bin/python3'
Dec 09 09:44:07 np0005551604.novalocal sudo[5229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:08 np0005551604.novalocal python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:08 np0005551604.novalocal sudo[5229]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:08 np0005551604.novalocal sudo[5307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpdxlvwkkdafvnzawxryecmccszejhxa ; /usr/bin/python3'
Dec 09 09:44:08 np0005551604.novalocal sudo[5307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:08 np0005551604.novalocal python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:44:08 np0005551604.novalocal sudo[5307]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:08 np0005551604.novalocal sudo[5380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhbwjwstdlpvtvgzuvahxagbmttxellr ; /usr/bin/python3'
Dec 09 09:44:08 np0005551604.novalocal sudo[5380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:09 np0005551604.novalocal python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765273448.157397-21-60973359415278/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:09 np0005551604.novalocal sudo[5380]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:09 np0005551604.novalocal python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:09 np0005551604.novalocal python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:10 np0005551604.novalocal python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:10 np0005551604.novalocal python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:10 np0005551604.novalocal python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:11 np0005551604.novalocal python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:11 np0005551604.novalocal python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:11 np0005551604.novalocal python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:11 np0005551604.novalocal python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:12 np0005551604.novalocal python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:12 np0005551604.novalocal python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:12 np0005551604.novalocal python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:12 np0005551604.novalocal python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:13 np0005551604.novalocal python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:13 np0005551604.novalocal python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:13 np0005551604.novalocal python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:13 np0005551604.novalocal python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:14 np0005551604.novalocal python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:14 np0005551604.novalocal python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:14 np0005551604.novalocal python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:15 np0005551604.novalocal python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:15 np0005551604.novalocal python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:15 np0005551604.novalocal python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:15 np0005551604.novalocal python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:16 np0005551604.novalocal python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:16 np0005551604.novalocal python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:44:19 np0005551604.novalocal sudo[6054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtevjhkegjzrhfdqbzsfmxgfyytqnxlj ; /usr/bin/python3'
Dec 09 09:44:19 np0005551604.novalocal sudo[6054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:19 np0005551604.novalocal python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 09 09:44:19 np0005551604.novalocal systemd[1]: Starting Time & Date Service...
Dec 09 09:44:19 np0005551604.novalocal systemd[1]: Started Time & Date Service.
Dec 09 09:44:19 np0005551604.novalocal systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Dec 09 09:44:19 np0005551604.novalocal sudo[6054]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:20 np0005551604.novalocal sudo[6085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzpnusixmfyhdjwucximqqpyihkcvcem ; /usr/bin/python3'
Dec 09 09:44:20 np0005551604.novalocal sudo[6085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:20 np0005551604.novalocal python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:20 np0005551604.novalocal sudo[6085]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:20 np0005551604.novalocal python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:44:21 np0005551604.novalocal python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765273460.4512272-153-13661873418982/source _original_basename=tmpu8075k11 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:21 np0005551604.novalocal python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:44:21 np0005551604.novalocal python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765273461.2445004-183-207328540469213/source _original_basename=tmp2eytex4x follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:22 np0005551604.novalocal sudo[6505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbuciqmtkmcjhomntrtjehhrvuoknqbq ; /usr/bin/python3'
Dec 09 09:44:22 np0005551604.novalocal sudo[6505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:22 np0005551604.novalocal python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:44:22 np0005551604.novalocal sudo[6505]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:22 np0005551604.novalocal sudo[6578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xizjgkbfyalapiwrceqgjxkbpftysprg ; /usr/bin/python3'
Dec 09 09:44:22 np0005551604.novalocal sudo[6578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:22 np0005551604.novalocal python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765273462.3794315-231-25856136471834/source _original_basename=tmprkokdgb1 follow=False checksum=863ac53d108e41f2ca0bf1e77a656f71228bd1da backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:23 np0005551604.novalocal sudo[6578]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:23 np0005551604.novalocal python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:44:24 np0005551604.novalocal python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:44:24 np0005551604.novalocal sudo[6732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adftcmrmajocmxwfetwwryoshtafnciu ; /usr/bin/python3'
Dec 09 09:44:24 np0005551604.novalocal sudo[6732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:24 np0005551604.novalocal python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:44:24 np0005551604.novalocal sudo[6732]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:24 np0005551604.novalocal sudo[6805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpsmvhsrmxjwyjqstvvnjdzcontymjni ; /usr/bin/python3'
Dec 09 09:44:24 np0005551604.novalocal sudo[6805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:24 np0005551604.novalocal python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765273464.2072675-273-176116194027674/source _original_basename=tmp6h523_3_ follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:24 np0005551604.novalocal sudo[6805]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:25 np0005551604.novalocal sudo[6856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aptjjugegfmvaecbtlquqocwqvarcwcy ; /usr/bin/python3'
Dec 09 09:44:25 np0005551604.novalocal sudo[6856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:25 np0005551604.novalocal python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-f51e-2f7d-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:44:25 np0005551604.novalocal sudo[6856]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:26 np0005551604.novalocal python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-f51e-2f7d-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 09 09:44:27 np0005551604.novalocal python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:48 np0005551604.novalocal sudo[6938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzagalctyuwldyklxwhgnloykegetzfn ; /usr/bin/python3'
Dec 09 09:44:48 np0005551604.novalocal sudo[6938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:44:49 np0005551604.novalocal python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:44:49 np0005551604.novalocal sudo[6938]: pam_unix(sudo:session): session closed for user root
Dec 09 09:44:49 np0005551604.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 09 09:45:23 np0005551604.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 09 09:45:23 np0005551604.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 09 09:45:23 np0005551604.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 09 09:45:23 np0005551604.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 09 09:45:23 np0005551604.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 09 09:45:23 np0005551604.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 09 09:45:23 np0005551604.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 09 09:45:23 np0005551604.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 09 09:45:23 np0005551604.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 09 09:45:23 np0005551604.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.4844] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 09 09:45:23 np0005551604.novalocal systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5015] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5045] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5048] device (eth1): carrier: link connected
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5050] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5057] policy: auto-activating connection 'Wired connection 1' (c9d71888-3b72-38d5-8bab-6a45e2651a1e)
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5061] device (eth1): Activation: starting connection 'Wired connection 1' (c9d71888-3b72-38d5-8bab-6a45e2651a1e)
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5062] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5065] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5069] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 09:45:23 np0005551604.novalocal NetworkManager[856]: <info>  [1765273523.5074] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 09 09:45:24 np0005551604.novalocal python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-007f-1033-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:45:31 np0005551604.novalocal sudo[7048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcbztevnwlailsnwgwtpxdgzzoxcnsal ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 09:45:31 np0005551604.novalocal sudo[7048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:45:31 np0005551604.novalocal python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:45:31 np0005551604.novalocal sudo[7048]: pam_unix(sudo:session): session closed for user root
Dec 09 09:45:31 np0005551604.novalocal sudo[7121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnjnmesptkwktojzmowwjhorkvykeltt ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 09:45:31 np0005551604.novalocal sudo[7121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:45:31 np0005551604.novalocal python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765273531.1486325-102-267079009406212/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=f926591e6fe74292e839d01c93dcd1f97740fbb7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:45:31 np0005551604.novalocal sudo[7121]: pam_unix(sudo:session): session closed for user root
Dec 09 09:45:32 np0005551604.novalocal sudo[7171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jckdmtdhxytqmdeivopqlebykepfradc ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 09:45:32 np0005551604.novalocal sudo[7171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:45:32 np0005551604.novalocal python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[856]: <info>  [1765273532.7654] caught SIGTERM, shutting down normally.
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: Stopping Network Manager...
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[856]: <info>  [1765273532.7665] dhcp4 (eth0): canceled DHCP transaction
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[856]: <info>  [1765273532.7665] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[856]: <info>  [1765273532.7666] dhcp4 (eth0): state changed no lease
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[856]: <info>  [1765273532.7669] manager: NetworkManager state is now CONNECTING
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[856]: <info>  [1765273532.7809] dhcp4 (eth1): canceled DHCP transaction
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[856]: <info>  [1765273532.7810] dhcp4 (eth1): state changed no lease
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[856]: <info>  [1765273532.7884] exiting (success)
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: Stopped Network Manager.
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: NetworkManager.service: Consumed 1.004s CPU time, 9.9M memory peak.
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: Starting Network Manager...
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.8629] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:f43569a1-1096-4e67-91b2-bda287c55398)
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.8631] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.8719] manager[0x5636826db000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: Starting Hostname Service...
Dec 09 09:45:32 np0005551604.novalocal systemd[1]: Started Hostname Service.
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9857] hostname: hostname: using hostnamed
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9859] hostname: static hostname changed from (none) to "np0005551604.novalocal"
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9864] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9868] manager[0x5636826db000]: rfkill: Wi-Fi hardware radio set enabled
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9868] manager[0x5636826db000]: rfkill: WWAN hardware radio set enabled
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9900] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9900] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9901] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9901] manager: Networking is enabled by state file
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9904] settings: Loaded settings plugin: keyfile (internal)
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9907] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9933] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9942] dhcp: init: Using DHCP client 'internal'
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9945] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9950] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9956] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9964] device (lo): Activation: starting connection 'lo' (4d2460cc-3851-4697-811d-bb6085f75db6)
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9971] device (eth0): carrier: link connected
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9975] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9980] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9981] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9987] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 09:45:32 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273532.9994] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0000] device (eth1): carrier: link connected
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0004] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0010] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (c9d71888-3b72-38d5-8bab-6a45e2651a1e) (indicated)
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0010] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0016] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0024] device (eth1): Activation: starting connection 'Wired connection 1' (c9d71888-3b72-38d5-8bab-6a45e2651a1e)
Dec 09 09:45:33 np0005551604.novalocal systemd[1]: Started Network Manager.
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0033] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0039] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0042] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0045] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0048] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0051] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0053] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0056] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0059] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0065] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0069] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0079] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0083] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0103] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0106] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0112] device (lo): Activation: successful, device activated.
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0120] dhcp4 (eth0): state changed new lease, address=38.102.83.201
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0128] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0195] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0227] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0230] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0234] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0238] device (eth0): Activation: successful, device activated.
Dec 09 09:45:33 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273533.0244] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 09 09:45:33 np0005551604.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 09 09:45:33 np0005551604.novalocal sudo[7171]: pam_unix(sudo:session): session closed for user root
Dec 09 09:45:33 np0005551604.novalocal python3[7257]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-007f-1033-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:45:43 np0005551604.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 09:46:00 np0005551604.novalocal systemd[4301]: Starting Mark boot as successful...
Dec 09 09:46:00 np0005551604.novalocal systemd[4301]: Finished Mark boot as successful.
Dec 09 09:46:03 np0005551604.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.1564] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 09:46:18 np0005551604.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 09:46:18 np0005551604.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.1968] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.1972] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.1983] device (eth1): Activation: successful, device activated.
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.1994] manager: startup complete
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.1998] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <warn>  [1765273578.2007] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2019] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 09 09:46:18 np0005551604.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2128] dhcp4 (eth1): canceled DHCP transaction
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2129] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2129] dhcp4 (eth1): state changed no lease
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2156] policy: auto-activating connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2164] device (eth1): Activation: starting connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2167] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2175] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2189] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2209] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2289] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2293] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 09:46:18 np0005551604.novalocal NetworkManager[7184]: <info>  [1765273578.2306] device (eth1): Activation: successful, device activated.
Dec 09 09:46:28 np0005551604.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 09:46:33 np0005551604.novalocal sshd-session[4310]: Received disconnect from 38.102.83.114 port 33620:11: disconnected by user
Dec 09 09:46:33 np0005551604.novalocal sshd-session[4310]: Disconnected from user zuul 38.102.83.114 port 33620
Dec 09 09:46:33 np0005551604.novalocal sshd-session[4297]: pam_unix(sshd:session): session closed for user zuul
Dec 09 09:46:33 np0005551604.novalocal systemd-logind[806]: Session 1 logged out. Waiting for processes to exit.
Dec 09 09:46:33 np0005551604.novalocal sshd-session[7286]: Accepted publickey for zuul from 38.102.83.114 port 60506 ssh2: RSA SHA256:OoA6ymXz1bGWu/N8aYc4tZBvI5ffrgdXcLpAm+SU/Q8
Dec 09 09:46:33 np0005551604.novalocal systemd-logind[806]: New session 3 of user zuul.
Dec 09 09:46:33 np0005551604.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 09 09:46:33 np0005551604.novalocal sshd-session[7286]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 09:46:33 np0005551604.novalocal sudo[7365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnmlwpavtsfsocipnpsbzljrzdeixqkr ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 09:46:33 np0005551604.novalocal sudo[7365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:46:33 np0005551604.novalocal python3[7367]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:46:33 np0005551604.novalocal sudo[7365]: pam_unix(sudo:session): session closed for user root
Dec 09 09:46:33 np0005551604.novalocal sudo[7438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vowklayflgaffbloxbyzatfkibgqmqbd ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 09 09:46:33 np0005551604.novalocal sudo[7438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:46:33 np0005551604.novalocal python3[7440]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765273593.301405-259-42036110475273/source _original_basename=tmpmortlohk follow=False checksum=4ae1a859dd4000488bb89b035ed2aff6b8cccaf9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:46:33 np0005551604.novalocal sudo[7438]: pam_unix(sudo:session): session closed for user root
Dec 09 09:46:36 np0005551604.novalocal sshd-session[7289]: Connection closed by 38.102.83.114 port 60506
Dec 09 09:46:36 np0005551604.novalocal sshd-session[7286]: pam_unix(sshd:session): session closed for user zuul
Dec 09 09:46:36 np0005551604.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 09 09:46:36 np0005551604.novalocal systemd-logind[806]: Session 3 logged out. Waiting for processes to exit.
Dec 09 09:46:36 np0005551604.novalocal systemd-logind[806]: Removed session 3.
Dec 09 09:48:51 np0005551604.novalocal sshd-session[7466]: Invalid user ubuntu from 58.82.169.249 port 52354
Dec 09 09:48:51 np0005551604.novalocal sshd-session[7466]: Received disconnect from 58.82.169.249 port 52354:11:  [preauth]
Dec 09 09:48:51 np0005551604.novalocal sshd-session[7466]: Disconnected from invalid user ubuntu 58.82.169.249 port 52354 [preauth]
Dec 09 09:49:00 np0005551604.novalocal systemd[4301]: Created slice User Background Tasks Slice.
Dec 09 09:49:00 np0005551604.novalocal systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Dec 09 09:49:00 np0005551604.novalocal systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Dec 09 09:51:17 np0005551604.novalocal sshd-session[7471]: Accepted publickey for zuul from 38.102.83.114 port 41112 ssh2: RSA SHA256:OoA6ymXz1bGWu/N8aYc4tZBvI5ffrgdXcLpAm+SU/Q8
Dec 09 09:51:17 np0005551604.novalocal systemd-logind[806]: New session 4 of user zuul.
Dec 09 09:51:17 np0005551604.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 09 09:51:17 np0005551604.novalocal sshd-session[7471]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 09:51:18 np0005551604.novalocal sudo[7498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmgbxgpvdcpcbvsktqzrzkcmewulxsyv ; /usr/bin/python3'
Dec 09 09:51:18 np0005551604.novalocal sudo[7498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:18 np0005551604.novalocal python3[7500]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda _uses_shell=True zuul_log_id=fa163ef9-e89a-74ef-b9a8-000000001f15-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:51:18 np0005551604.novalocal sudo[7498]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:18 np0005551604.novalocal sudo[7527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttapugprakmiupobeqxxlihiwnkxbfyn ; /usr/bin/python3'
Dec 09 09:51:18 np0005551604.novalocal sudo[7527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:18 np0005551604.novalocal python3[7529]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:51:18 np0005551604.novalocal sudo[7527]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:18 np0005551604.novalocal sudo[7553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuiwfzjlxtaokrmvwxpbdafftijfyppm ; /usr/bin/python3'
Dec 09 09:51:18 np0005551604.novalocal sudo[7553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:18 np0005551604.novalocal python3[7555]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:51:18 np0005551604.novalocal sudo[7553]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:18 np0005551604.novalocal sudo[7579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocdckubcipkzxclevvodbcwdxvwkndmk ; /usr/bin/python3'
Dec 09 09:51:18 np0005551604.novalocal sudo[7579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:19 np0005551604.novalocal python3[7581]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:51:19 np0005551604.novalocal sudo[7579]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:19 np0005551604.novalocal sudo[7605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krccwwlnrdskjfddnncmioiytjprznhm ; /usr/bin/python3'
Dec 09 09:51:19 np0005551604.novalocal sudo[7605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:19 np0005551604.novalocal python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:51:19 np0005551604.novalocal sudo[7605]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:20 np0005551604.novalocal sudo[7631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytesqbgozassqjbaljoruajaftblclir ; /usr/bin/python3'
Dec 09 09:51:20 np0005551604.novalocal sudo[7631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:20 np0005551604.novalocal python3[7633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:51:20 np0005551604.novalocal sudo[7631]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:20 np0005551604.novalocal sudo[7709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msuvmmvsyelfagjknysqrkegkaycncvs ; /usr/bin/python3'
Dec 09 09:51:20 np0005551604.novalocal sudo[7709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:20 np0005551604.novalocal python3[7711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:51:20 np0005551604.novalocal sudo[7709]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:21 np0005551604.novalocal sudo[7782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvqmwirfhbynpyhrlqlzwizyphcwbzfp ; /usr/bin/python3'
Dec 09 09:51:21 np0005551604.novalocal sudo[7782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:21 np0005551604.novalocal python3[7784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765273880.6044233-490-85188014592243/source _original_basename=tmpw3nhvz2a follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:51:21 np0005551604.novalocal sudo[7782]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:22 np0005551604.novalocal sudo[7832]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjcurkocdozihasothxoiypgzncdbqzv ; /usr/bin/python3'
Dec 09 09:51:22 np0005551604.novalocal sudo[7832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:22 np0005551604.novalocal python3[7834]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 09:51:22 np0005551604.novalocal systemd[1]: Reloading.
Dec 09 09:51:22 np0005551604.novalocal systemd-rc-local-generator[7857]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 09:51:22 np0005551604.novalocal sudo[7832]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:24 np0005551604.novalocal sudo[7888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icrdbcffpohuuingoncvldhhzpkfeopo ; /usr/bin/python3'
Dec 09 09:51:24 np0005551604.novalocal sudo[7888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:24 np0005551604.novalocal python3[7890]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 09 09:51:24 np0005551604.novalocal sudo[7888]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:24 np0005551604.novalocal sudo[7914]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqwghdrveilfbhmeuangykaogtnwevsh ; /usr/bin/python3'
Dec 09 09:51:24 np0005551604.novalocal sudo[7914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:24 np0005551604.novalocal python3[7916]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:51:24 np0005551604.novalocal sudo[7914]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:24 np0005551604.novalocal sudo[7942]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbjurljygeztixxkiyxuvenvxfmlqhh ; /usr/bin/python3'
Dec 09 09:51:24 np0005551604.novalocal sudo[7942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:24 np0005551604.novalocal python3[7944]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:51:24 np0005551604.novalocal sudo[7942]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:25 np0005551604.novalocal sudo[7970]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaeppaxbinyddyjoqgwbcebewgoywxqy ; /usr/bin/python3'
Dec 09 09:51:25 np0005551604.novalocal sudo[7970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:25 np0005551604.novalocal python3[7972]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:51:25 np0005551604.novalocal sudo[7970]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:25 np0005551604.novalocal sudo[7998]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcdxyrwrwqdowkrfbeiyeqoxynjxpcwt ; /usr/bin/python3'
Dec 09 09:51:25 np0005551604.novalocal sudo[7998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:25 np0005551604.novalocal python3[8000]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:51:25 np0005551604.novalocal sudo[7998]: pam_unix(sudo:session): session closed for user root
Dec 09 09:51:26 np0005551604.novalocal python3[8027]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ef9-e89a-74ef-b9a8-000000001f1c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:51:26 np0005551604.novalocal python3[8057]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 09:51:28 np0005551604.novalocal sshd-session[7474]: Connection closed by 38.102.83.114 port 41112
Dec 09 09:51:28 np0005551604.novalocal sshd-session[7471]: pam_unix(sshd:session): session closed for user zuul
Dec 09 09:51:28 np0005551604.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 09 09:51:28 np0005551604.novalocal systemd[1]: session-4.scope: Consumed 4.092s CPU time.
Dec 09 09:51:28 np0005551604.novalocal systemd-logind[806]: Session 4 logged out. Waiting for processes to exit.
Dec 09 09:51:28 np0005551604.novalocal systemd-logind[806]: Removed session 4.
Dec 09 09:51:30 np0005551604.novalocal sshd-session[8061]: Accepted publickey for zuul from 38.102.83.114 port 38902 ssh2: RSA SHA256:OoA6ymXz1bGWu/N8aYc4tZBvI5ffrgdXcLpAm+SU/Q8
Dec 09 09:51:30 np0005551604.novalocal systemd-logind[806]: New session 5 of user zuul.
Dec 09 09:51:30 np0005551604.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 09 09:51:30 np0005551604.novalocal sshd-session[8061]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 09:51:30 np0005551604.novalocal sudo[8088]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsktzmomvmtlzhexpgaghbckyrqvqihl ; /usr/bin/python3'
Dec 09 09:51:30 np0005551604.novalocal sudo[8088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:51:30 np0005551604.novalocal python3[8090]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 09 09:51:46 np0005551604.novalocal kernel: SELinux:  Converting 383 SID table entries...
Dec 09 09:51:46 np0005551604.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 09:51:46 np0005551604.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 09:51:46 np0005551604.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 09:51:46 np0005551604.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 09:51:46 np0005551604.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 09:51:46 np0005551604.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 09:51:46 np0005551604.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 09:51:55 np0005551604.novalocal kernel: SELinux:  Converting 383 SID table entries...
Dec 09 09:51:55 np0005551604.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 09:51:55 np0005551604.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 09:51:55 np0005551604.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 09:51:55 np0005551604.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 09:51:55 np0005551604.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 09:51:55 np0005551604.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 09:51:55 np0005551604.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 09:52:04 np0005551604.novalocal kernel: SELinux:  Converting 383 SID table entries...
Dec 09 09:52:04 np0005551604.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 09:52:04 np0005551604.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 09:52:04 np0005551604.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 09:52:04 np0005551604.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 09:52:04 np0005551604.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 09:52:04 np0005551604.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 09:52:04 np0005551604.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 09:52:06 np0005551604.novalocal setsebool[8156]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 09 09:52:06 np0005551604.novalocal setsebool[8156]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 09 09:52:17 np0005551604.novalocal kernel: SELinux:  Converting 386 SID table entries...
Dec 09 09:52:17 np0005551604.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 09:52:17 np0005551604.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 09 09:52:17 np0005551604.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 09:52:17 np0005551604.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 09 09:52:17 np0005551604.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 09:52:17 np0005551604.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 09:52:17 np0005551604.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 09:52:36 np0005551604.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 09 09:52:36 np0005551604.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 09:52:36 np0005551604.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 09 09:52:36 np0005551604.novalocal systemd[1]: Reloading.
Dec 09 09:52:36 np0005551604.novalocal systemd-rc-local-generator[8912]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 09:52:36 np0005551604.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 09:52:37 np0005551604.novalocal sudo[8088]: pam_unix(sudo:session): session closed for user root
Dec 09 09:52:51 np0005551604.novalocal python3[16973]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163ef9-e89a-8f0d-a351-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 09:52:52 np0005551604.novalocal kernel: evm: overlay not supported
Dec 09 09:52:52 np0005551604.novalocal systemd[4301]: Starting D-Bus User Message Bus...
Dec 09 09:52:52 np0005551604.novalocal dbus-broker-launch[17476]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 09 09:52:52 np0005551604.novalocal dbus-broker-launch[17476]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 09 09:52:52 np0005551604.novalocal systemd[4301]: Started D-Bus User Message Bus.
Dec 09 09:52:52 np0005551604.novalocal dbus-broker-lau[17476]: Ready
Dec 09 09:52:52 np0005551604.novalocal systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 09 09:52:52 np0005551604.novalocal systemd[4301]: Created slice Slice /user.
Dec 09 09:52:52 np0005551604.novalocal systemd[4301]: podman-17409.scope: unit configures an IP firewall, but not running as root.
Dec 09 09:52:52 np0005551604.novalocal systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Dec 09 09:52:52 np0005551604.novalocal systemd[4301]: Started podman-17409.scope.
Dec 09 09:52:52 np0005551604.novalocal systemd[4301]: Started podman-pause-1ca8aea7.scope.
Dec 09 09:52:53 np0005551604.novalocal sudo[17959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvmpnkiblhmvnydyjpmpftdeyquwztic ; /usr/bin/python3'
Dec 09 09:52:53 np0005551604.novalocal sudo[17959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:52:53 np0005551604.novalocal python3[17970]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.20:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.20:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:52:53 np0005551604.novalocal python3[17970]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 09 09:52:53 np0005551604.novalocal sudo[17959]: pam_unix(sudo:session): session closed for user root
Dec 09 09:52:54 np0005551604.novalocal sshd-session[8064]: Connection closed by 38.102.83.114 port 38902
Dec 09 09:52:54 np0005551604.novalocal sshd-session[8061]: pam_unix(sshd:session): session closed for user zuul
Dec 09 09:52:54 np0005551604.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 09 09:52:54 np0005551604.novalocal systemd[1]: session-5.scope: Consumed 1min 3.547s CPU time.
Dec 09 09:52:54 np0005551604.novalocal systemd-logind[806]: Session 5 logged out. Waiting for processes to exit.
Dec 09 09:52:54 np0005551604.novalocal systemd-logind[806]: Removed session 5.
Dec 09 09:53:17 np0005551604.novalocal sshd-session[27035]: Connection closed by 38.102.83.145 port 59278 [preauth]
Dec 09 09:53:17 np0005551604.novalocal sshd-session[27039]: Unable to negotiate with 38.102.83.145 port 59304: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 09 09:53:17 np0005551604.novalocal sshd-session[27033]: Connection closed by 38.102.83.145 port 59270 [preauth]
Dec 09 09:53:17 np0005551604.novalocal sshd-session[27036]: Unable to negotiate with 38.102.83.145 port 59292: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 09 09:53:17 np0005551604.novalocal sshd-session[27037]: Unable to negotiate with 38.102.83.145 port 59312: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 09 09:53:21 np0005551604.novalocal sshd-session[28675]: Accepted publickey for zuul from 38.102.83.114 port 50380 ssh2: RSA SHA256:OoA6ymXz1bGWu/N8aYc4tZBvI5ffrgdXcLpAm+SU/Q8
Dec 09 09:53:21 np0005551604.novalocal systemd-logind[806]: New session 6 of user zuul.
Dec 09 09:53:21 np0005551604.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 09 09:53:21 np0005551604.novalocal sshd-session[28675]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 09:53:22 np0005551604.novalocal python3[28780]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ378N+Oz5m8eLVC8HHlrxMbp1qIUGFsk3C6HoxKm6dcQaGp03ZHLJaCYgcfGkRl7+5RL+g4qxcj1Em4fs9vNXY= zuul@np0005551603.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:53:22 np0005551604.novalocal sudo[28943]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xftxocmvdbtmhwpmkcziogckrztqzuto ; /usr/bin/python3'
Dec 09 09:53:22 np0005551604.novalocal sudo[28943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:53:22 np0005551604.novalocal python3[28957]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ378N+Oz5m8eLVC8HHlrxMbp1qIUGFsk3C6HoxKm6dcQaGp03ZHLJaCYgcfGkRl7+5RL+g4qxcj1Em4fs9vNXY= zuul@np0005551603.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:53:22 np0005551604.novalocal sudo[28943]: pam_unix(sudo:session): session closed for user root
Dec 09 09:53:23 np0005551604.novalocal sudo[29324]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxvqpchojocqeghuuqhdriqynvxsddls ; /usr/bin/python3'
Dec 09 09:53:23 np0005551604.novalocal sudo[29324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:53:23 np0005551604.novalocal python3[29334]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005551604.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 09 09:53:23 np0005551604.novalocal useradd[29407]: new group: name=cloud-admin, GID=1002
Dec 09 09:53:23 np0005551604.novalocal useradd[29407]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 09 09:53:23 np0005551604.novalocal sudo[29324]: pam_unix(sudo:session): session closed for user root
Dec 09 09:53:23 np0005551604.novalocal sudo[29524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbhygztnelzdjyfqvrrkhbwrdmeoebdo ; /usr/bin/python3'
Dec 09 09:53:23 np0005551604.novalocal sudo[29524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:53:23 np0005551604.novalocal python3[29534]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ378N+Oz5m8eLVC8HHlrxMbp1qIUGFsk3C6HoxKm6dcQaGp03ZHLJaCYgcfGkRl7+5RL+g4qxcj1Em4fs9vNXY= zuul@np0005551603.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 09 09:53:23 np0005551604.novalocal sudo[29524]: pam_unix(sudo:session): session closed for user root
Dec 09 09:53:24 np0005551604.novalocal sudo[29781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snptxnvcqyjgpkzikeivbpvcampaerec ; /usr/bin/python3'
Dec 09 09:53:24 np0005551604.novalocal sudo[29781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:53:24 np0005551604.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 09:53:24 np0005551604.novalocal systemd[1]: Finished man-db-cache-update.service.
Dec 09 09:53:24 np0005551604.novalocal systemd[1]: man-db-cache-update.service: Consumed 55.825s CPU time.
Dec 09 09:53:24 np0005551604.novalocal systemd[1]: run-r86525ca06c184a608d590d71624be2a0.service: Deactivated successfully.
Dec 09 09:53:24 np0005551604.novalocal python3[29790]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:53:24 np0005551604.novalocal sudo[29781]: pam_unix(sudo:session): session closed for user root
Dec 09 09:53:24 np0005551604.novalocal sudo[29876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxinieczqnqxfdxwafcbrlbrvncevicv ; /usr/bin/python3'
Dec 09 09:53:24 np0005551604.novalocal sudo[29876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:53:24 np0005551604.novalocal python3[29878]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765274003.9499478-135-44035750658726/source _original_basename=tmpmntqbjhy follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:53:24 np0005551604.novalocal sudo[29876]: pam_unix(sudo:session): session closed for user root
Dec 09 09:53:25 np0005551604.novalocal sudo[29926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifokvbielfryeutwhmnroqthyvydkskw ; /usr/bin/python3'
Dec 09 09:53:25 np0005551604.novalocal sudo[29926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:53:25 np0005551604.novalocal python3[29928]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 09 09:53:25 np0005551604.novalocal systemd[1]: Starting Hostname Service...
Dec 09 09:53:25 np0005551604.novalocal systemd[1]: Started Hostname Service.
Dec 09 09:53:25 np0005551604.novalocal systemd-hostnamed[29932]: Changed pretty hostname to 'compute-0'
Dec 09 09:53:25 compute-0 systemd-hostnamed[29932]: Hostname set to <compute-0> (static)
Dec 09 09:53:25 compute-0 NetworkManager[7184]: <info>  [1765274005.7394] hostname: static hostname changed from "np0005551604.novalocal" to "compute-0"
Dec 09 09:53:25 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 09:53:25 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 09:53:25 compute-0 sudo[29926]: pam_unix(sudo:session): session closed for user root
Dec 09 09:53:26 compute-0 sshd-session[28728]: Connection closed by 38.102.83.114 port 50380
Dec 09 09:53:26 compute-0 sshd-session[28675]: pam_unix(sshd:session): session closed for user zuul
Dec 09 09:53:26 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 09 09:53:26 compute-0 systemd[1]: session-6.scope: Consumed 2.415s CPU time.
Dec 09 09:53:26 compute-0 systemd-logind[806]: Session 6 logged out. Waiting for processes to exit.
Dec 09 09:53:26 compute-0 systemd-logind[806]: Removed session 6.
Dec 09 09:53:35 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 09:53:55 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 09:58:00 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 09 09:58:00 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 09 09:58:00 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 09 09:58:00 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 09 09:58:30 compute-0 sshd-session[29954]: Accepted publickey for zuul from 38.102.83.145 port 53786 ssh2: RSA SHA256:OoA6ymXz1bGWu/N8aYc4tZBvI5ffrgdXcLpAm+SU/Q8
Dec 09 09:58:30 compute-0 systemd-logind[806]: New session 7 of user zuul.
Dec 09 09:58:30 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 09 09:58:30 compute-0 sshd-session[29954]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 09:58:30 compute-0 python3[30030]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 09:58:32 compute-0 sshd-session[30069]: Invalid user yhtcAdmin from 45.148.10.121 port 57468
Dec 09 09:58:32 compute-0 sshd-session[30069]: Connection closed by invalid user yhtcAdmin 45.148.10.121 port 57468 [preauth]
Dec 09 09:58:32 compute-0 sudo[30146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irkjmjcxtcsgvyxqmsoxljvktkomrbiu ; /usr/bin/python3'
Dec 09 09:58:32 compute-0 sudo[30146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:32 compute-0 python3[30148]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:58:32 compute-0 sudo[30146]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:33 compute-0 sudo[30219]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leernofcbmlvdmxabmenlifurqdvrsat ; /usr/bin/python3'
Dec 09 09:58:33 compute-0 sudo[30219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:33 compute-0 python3[30221]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:58:33 compute-0 sudo[30219]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:33 compute-0 sudo[30245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyrjzwaadbgnbxmixaxtrafswjgnvysf ; /usr/bin/python3'
Dec 09 09:58:33 compute-0 sudo[30245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:33 compute-0 python3[30247]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:58:33 compute-0 sudo[30245]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:33 compute-0 sudo[30318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajbubpdmzjywpgrexbqwudlescozrbmk ; /usr/bin/python3'
Dec 09 09:58:33 compute-0 sudo[30318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:33 compute-0 python3[30320]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:58:33 compute-0 sudo[30318]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:33 compute-0 sudo[30344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eukeygcbgwgtkzkagcmcucmptjuhixfe ; /usr/bin/python3'
Dec 09 09:58:33 compute-0 sudo[30344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:34 compute-0 python3[30346]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:58:34 compute-0 sudo[30344]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:34 compute-0 sudo[30417]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpbdlrrmjuejwgvuzdwlxlfyqhkqagpc ; /usr/bin/python3'
Dec 09 09:58:34 compute-0 sudo[30417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:34 compute-0 python3[30419]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:58:34 compute-0 sudo[30417]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:34 compute-0 sudo[30443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqhlptbsqngytkvcwhvzctfounizurxp ; /usr/bin/python3'
Dec 09 09:58:34 compute-0 sudo[30443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:34 compute-0 python3[30445]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:58:34 compute-0 sudo[30443]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:35 compute-0 sudo[30516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otzlrkbfxumhzprkurwsxhkcbksqkdcf ; /usr/bin/python3'
Dec 09 09:58:35 compute-0 sudo[30516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:35 compute-0 python3[30518]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:58:35 compute-0 sudo[30516]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:35 compute-0 sudo[30542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucuzklrhoyyhskhdvanllicxrejwxfng ; /usr/bin/python3'
Dec 09 09:58:35 compute-0 sudo[30542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:35 compute-0 python3[30544]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:58:35 compute-0 sudo[30542]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:35 compute-0 sudo[30615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkpxvxfokujzuzkvklogerptowcythho ; /usr/bin/python3'
Dec 09 09:58:35 compute-0 sudo[30615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:35 compute-0 python3[30617]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:58:35 compute-0 sudo[30615]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:35 compute-0 sudo[30641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gksaaqjiipeovdrhnzbpvswttgyanqrv ; /usr/bin/python3'
Dec 09 09:58:35 compute-0 sudo[30641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:36 compute-0 python3[30643]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:58:36 compute-0 sudo[30641]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:36 compute-0 sudo[30714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmglhnenwzxgcdrdpeovjtwbmfrcxhve ; /usr/bin/python3'
Dec 09 09:58:36 compute-0 sudo[30714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:36 compute-0 python3[30716]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:58:36 compute-0 sudo[30714]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:36 compute-0 sudo[30740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubgvnnxbeswhukxnbhanfpctpugjqyzx ; /usr/bin/python3'
Dec 09 09:58:36 compute-0 sudo[30740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:36 compute-0 python3[30742]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 09 09:58:36 compute-0 sudo[30740]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:36 compute-0 sudo[30813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thcbiosaxqixkpvxmvghyhynrlzgwjht ; /usr/bin/python3'
Dec 09 09:58:36 compute-0 sudo[30813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 09:58:37 compute-0 python3[30815]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 09:58:37 compute-0 sudo[30813]: pam_unix(sudo:session): session closed for user root
Dec 09 09:58:40 compute-0 sshd-session[30842]: Unable to negotiate with 192.168.122.11 port 54436: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 09 09:58:40 compute-0 sshd-session[30844]: Connection closed by 192.168.122.11 port 54420 [preauth]
Dec 09 09:58:40 compute-0 sshd-session[30845]: Connection closed by 192.168.122.11 port 54422 [preauth]
Dec 09 09:58:40 compute-0 sshd-session[30843]: Unable to negotiate with 192.168.122.11 port 54428: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 09 09:58:40 compute-0 sshd-session[30841]: Unable to negotiate with 192.168.122.11 port 54446: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 09 09:58:40 compute-0 sshd-session[30840]: banner exchange: Connection from 104.41.137.249 port 38626: invalid format
Dec 09 10:01:01 compute-0 CROND[30853]: (root) CMD (run-parts /etc/cron.hourly)
Dec 09 10:01:01 compute-0 run-parts[30856]: (/etc/cron.hourly) starting 0anacron
Dec 09 10:01:01 compute-0 anacron[30864]: Anacron started on 2025-12-09
Dec 09 10:01:01 compute-0 anacron[30864]: Will run job `cron.daily' in 50 min.
Dec 09 10:01:01 compute-0 anacron[30864]: Will run job `cron.weekly' in 70 min.
Dec 09 10:01:01 compute-0 anacron[30864]: Will run job `cron.monthly' in 90 min.
Dec 09 10:01:01 compute-0 anacron[30864]: Jobs will be executed sequentially
Dec 09 10:01:01 compute-0 run-parts[30866]: (/etc/cron.hourly) finished 0anacron
Dec 09 10:01:01 compute-0 CROND[30852]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 09 10:02:03 compute-0 python3[30890]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:07:03 compute-0 sshd-session[29957]: Received disconnect from 38.102.83.145 port 53786:11: disconnected by user
Dec 09 10:07:03 compute-0 sshd-session[29957]: Disconnected from user zuul 38.102.83.145 port 53786
Dec 09 10:07:03 compute-0 sshd-session[29954]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:07:03 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 09 10:07:03 compute-0 systemd[1]: session-7.scope: Consumed 5.352s CPU time.
Dec 09 10:07:03 compute-0 systemd-logind[806]: Session 7 logged out. Waiting for processes to exit.
Dec 09 10:07:03 compute-0 systemd-logind[806]: Removed session 7.
Dec 09 10:09:17 compute-0 sshd-session[30895]: Received disconnect from 193.46.255.7 port 30506:11:  [preauth]
Dec 09 10:09:17 compute-0 sshd-session[30895]: Disconnected from authenticating user root 193.46.255.7 port 30506 [preauth]
Dec 09 10:11:08 compute-0 sshd-session[30897]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Dec 09 10:11:18 compute-0 sshd-session[30897]: Connection closed by authenticating user root 139.19.117.197 port 53152 [preauth]
Dec 09 10:12:50 compute-0 sshd-session[30900]: Received disconnect from 193.46.255.33 port 19586:11:  [preauth]
Dec 09 10:12:50 compute-0 sshd-session[30900]: Disconnected from authenticating user root 193.46.255.33 port 19586 [preauth]
Dec 09 10:17:55 compute-0 sshd-session[30905]: Accepted publickey for zuul from 192.168.122.30 port 46784 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:17:55 compute-0 systemd-logind[806]: New session 8 of user zuul.
Dec 09 10:17:55 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 09 10:17:55 compute-0 sshd-session[30905]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:17:56 compute-0 python3.9[31058]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:17:57 compute-0 sudo[31237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdtkfapscrflhjuoirzgfpccxhevkhzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275477.126611-32-258919812346532/AnsiballZ_command.py'
Dec 09 10:17:57 compute-0 sudo[31237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:17:57 compute-0 python3.9[31239]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:18:05 compute-0 sudo[31237]: pam_unix(sudo:session): session closed for user root
Dec 09 10:18:05 compute-0 sshd-session[30908]: Connection closed by 192.168.122.30 port 46784
Dec 09 10:18:05 compute-0 sshd-session[30905]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:18:05 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 09 10:18:05 compute-0 systemd[1]: session-8.scope: Consumed 8.334s CPU time.
Dec 09 10:18:05 compute-0 systemd-logind[806]: Session 8 logged out. Waiting for processes to exit.
Dec 09 10:18:05 compute-0 systemd-logind[806]: Removed session 8.
Dec 09 10:18:25 compute-0 sshd-session[31297]: Accepted publickey for zuul from 192.168.122.30 port 42686 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:18:25 compute-0 systemd-logind[806]: New session 9 of user zuul.
Dec 09 10:18:25 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 09 10:18:25 compute-0 sshd-session[31297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:18:27 compute-0 python3.9[31450]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:18:27 compute-0 sshd-session[31300]: Connection closed by 192.168.122.30 port 42686
Dec 09 10:18:27 compute-0 sshd-session[31297]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:18:27 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 09 10:18:27 compute-0 systemd-logind[806]: Session 9 logged out. Waiting for processes to exit.
Dec 09 10:18:27 compute-0 systemd-logind[806]: Removed session 9.
Dec 09 10:18:43 compute-0 sshd-session[31478]: Accepted publickey for zuul from 192.168.122.30 port 60594 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:18:43 compute-0 systemd-logind[806]: New session 10 of user zuul.
Dec 09 10:18:43 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 09 10:18:43 compute-0 sshd-session[31478]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:18:44 compute-0 python3.9[31631]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 09 10:18:46 compute-0 python3.9[31805]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:18:46 compute-0 sudo[31955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzhxmeclhptilurujbgmjhjzosnksjdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275526.476389-45-238390364703698/AnsiballZ_command.py'
Dec 09 10:18:46 compute-0 sudo[31955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:18:47 compute-0 python3.9[31957]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:18:47 compute-0 sudo[31955]: pam_unix(sudo:session): session closed for user root
Dec 09 10:18:47 compute-0 sudo[32108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilkiymsqgvtlvhingnwyhqwtbtgxofbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275527.4230003-57-127409916435560/AnsiballZ_stat.py'
Dec 09 10:18:47 compute-0 sudo[32108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:18:48 compute-0 python3.9[32110]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:18:48 compute-0 sudo[32108]: pam_unix(sudo:session): session closed for user root
Dec 09 10:18:49 compute-0 sudo[32260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igkufedgbmdxxmrhzvdgwczaverdknsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275528.6943157-65-220436644247999/AnsiballZ_file.py'
Dec 09 10:18:49 compute-0 sudo[32260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:18:49 compute-0 python3.9[32262]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:18:49 compute-0 sudo[32260]: pam_unix(sudo:session): session closed for user root
Dec 09 10:18:49 compute-0 sudo[32412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzhbegdmnyxkpitrcdtecdqslguqesga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275529.664212-73-46966097715200/AnsiballZ_stat.py'
Dec 09 10:18:49 compute-0 sudo[32412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:18:50 compute-0 python3.9[32414]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:18:50 compute-0 sudo[32412]: pam_unix(sudo:session): session closed for user root
Dec 09 10:18:50 compute-0 sudo[32535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeulqunzjrmtuioiubpgcsbmbvchtbkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275529.664212-73-46966097715200/AnsiballZ_copy.py'
Dec 09 10:18:50 compute-0 sudo[32535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:18:51 compute-0 python3.9[32537]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275529.664212-73-46966097715200/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:18:51 compute-0 sudo[32535]: pam_unix(sudo:session): session closed for user root
Dec 09 10:18:51 compute-0 sudo[32687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbiapqhadaqwtfkjvqduxzikdvbmbxmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275531.2273629-88-161958467403074/AnsiballZ_setup.py'
Dec 09 10:18:51 compute-0 sudo[32687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:18:51 compute-0 python3.9[32689]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:18:52 compute-0 sudo[32687]: pam_unix(sudo:session): session closed for user root
Dec 09 10:18:52 compute-0 sudo[32843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgsgzvcttbttqvyzkprcggmdkwonfskb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275532.394301-96-86147006526653/AnsiballZ_file.py'
Dec 09 10:18:52 compute-0 sudo[32843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:18:53 compute-0 python3.9[32845]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:18:53 compute-0 sudo[32843]: pam_unix(sudo:session): session closed for user root
Dec 09 10:18:53 compute-0 sudo[32995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hysvfahmtzjjaiztwwvtvqzurvkvzwaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275533.590106-105-19756600429031/AnsiballZ_file.py'
Dec 09 10:18:53 compute-0 sudo[32995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:18:54 compute-0 python3.9[32997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:18:54 compute-0 sudo[32995]: pam_unix(sudo:session): session closed for user root
Dec 09 10:18:55 compute-0 python3.9[33147]: ansible-ansible.builtin.service_facts Invoked
Dec 09 10:19:00 compute-0 python3.9[33400]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:19:01 compute-0 python3.9[33550]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:19:02 compute-0 python3.9[33704]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:19:03 compute-0 sudo[33860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbzmqmozvegshjvqyedtverspmenquep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275543.1624708-153-122938134525195/AnsiballZ_setup.py'
Dec 09 10:19:03 compute-0 sudo[33860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:19:03 compute-0 python3.9[33862]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:19:04 compute-0 sudo[33860]: pam_unix(sudo:session): session closed for user root
Dec 09 10:19:04 compute-0 sudo[33944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grhvgtntvbsoznstemusqymxqyxponxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275543.1624708-153-122938134525195/AnsiballZ_dnf.py'
Dec 09 10:19:04 compute-0 sudo[33944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:19:05 compute-0 python3.9[33946]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:19:50 compute-0 systemd[1]: Reloading.
Dec 09 10:19:50 compute-0 systemd-rc-local-generator[34144]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:19:50 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 09 10:19:50 compute-0 systemd[1]: Reloading.
Dec 09 10:19:50 compute-0 systemd-rc-local-generator[34183]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:19:50 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 09 10:19:51 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 09 10:19:51 compute-0 systemd[1]: Reloading.
Dec 09 10:19:51 compute-0 systemd-rc-local-generator[34222]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:19:51 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 09 10:19:51 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec 09 10:19:51 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec 09 10:19:51 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec 09 10:21:02 compute-0 kernel: SELinux:  Converting 2716 SID table entries...
Dec 09 10:21:02 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 10:21:02 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 10:21:02 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 10:21:02 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 10:21:02 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 10:21:02 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 10:21:02 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 10:21:02 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 09 10:21:03 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 10:21:03 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 10:21:03 compute-0 systemd[1]: Reloading.
Dec 09 10:21:03 compute-0 systemd-rc-local-generator[34546]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:21:03 compute-0 systemd[1]: Starting dnf makecache...
Dec 09 10:21:03 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 10:21:03 compute-0 dnf[34583]: Failed determining last makecache time.
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-barbican-42b4c41831408a8e323 141 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 211 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-cinder-1c00d6490d88e436f26ef 209 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-python-stevedore-c4acc5639fd2329372142 196 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-python-cloudkitty-tests-tempest-2c80f8 154 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-os-refresh-config-9bfc52b5049be2d8de61 221 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 151 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-python-designate-tests-tempest-347fdbc 169 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-glance-1fd12c29b339f30fe823e 193 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 195 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-manila-3c01b7181572c95dac462 199 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 sudo[33944]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-python-whitebox-neutron-tests-tempest- 159 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-octavia-ba397f07a7331190208c 160 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-watcher-c014f81a8647287f6dcc 155 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-ansible-config_template-5ccaa22121a7ff 163 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 173 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-swift-dc98a8463506ac520c469a 167 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-python-tempestconf-8515371b7cceebd4282 168 kB/s | 3.0 kB     00:00
Dec 09 10:21:03 compute-0 dnf[34583]: delorean-openstack-heat-ui-013accbfd179753bc3f0 207 kB/s | 3.0 kB     00:00
Dec 09 10:21:04 compute-0 dnf[34583]: CentOS Stream 9 - BaseOS                         46 kB/s | 5.4 kB     00:00
Dec 09 10:21:04 compute-0 dnf[34583]: CentOS Stream 9 - AppStream                      64 kB/s | 5.8 kB     00:00
Dec 09 10:21:04 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 10:21:04 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 10:21:04 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.315s CPU time.
Dec 09 10:21:04 compute-0 systemd[1]: run-r514b4e584f7f4bd79cacaa965025fb03.service: Deactivated successfully.
Dec 09 10:21:04 compute-0 sudo[35483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnxtvdwgxzbterdpnmiodbezorjbkbvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275664.1797557-165-9284049485177/AnsiballZ_command.py'
Dec 09 10:21:04 compute-0 sudo[35483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:04 compute-0 dnf[34583]: CentOS Stream 9 - CRB                            35 kB/s | 5.3 kB     00:00
Dec 09 10:21:04 compute-0 python3.9[35485]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:21:04 compute-0 dnf[34583]: CentOS Stream 9 - Extras packages                29 kB/s | 8.3 kB     00:00
Dec 09 10:21:04 compute-0 dnf[34583]: dlrn-antelope-testing                           193 kB/s | 3.0 kB     00:00
Dec 09 10:21:04 compute-0 dnf[34583]: dlrn-antelope-build-deps                        117 kB/s | 3.0 kB     00:00
Dec 09 10:21:04 compute-0 dnf[34583]: centos9-rabbitmq                                101 kB/s | 3.0 kB     00:00
Dec 09 10:21:04 compute-0 dnf[34583]: centos9-storage                                 109 kB/s | 3.0 kB     00:00
Dec 09 10:21:05 compute-0 dnf[34583]: centos9-opstools                                121 kB/s | 3.0 kB     00:00
Dec 09 10:21:05 compute-0 dnf[34583]: NFV SIG OpenvSwitch                             116 kB/s | 3.0 kB     00:00
Dec 09 10:21:05 compute-0 dnf[34583]: repo-setup-centos-appstream                     105 kB/s | 4.4 kB     00:00
Dec 09 10:21:05 compute-0 dnf[34583]: repo-setup-centos-baseos                        192 kB/s | 3.9 kB     00:00
Dec 09 10:21:05 compute-0 dnf[34583]: repo-setup-centos-highavailability              111 kB/s | 3.9 kB     00:00
Dec 09 10:21:05 compute-0 dnf[34583]: repo-setup-centos-powertools                    212 kB/s | 4.3 kB     00:00
Dec 09 10:21:05 compute-0 dnf[34583]: Extra Packages for Enterprise Linux 9 - x86_64  162 kB/s |  28 kB     00:00
Dec 09 10:21:05 compute-0 sudo[35483]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:06 compute-0 dnf[34583]: Metadata cache created.
Dec 09 10:21:06 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 09 10:21:06 compute-0 systemd[1]: Finished dnf makecache.
Dec 09 10:21:06 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.797s CPU time.
Dec 09 10:21:07 compute-0 sudo[35785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oezeqpmpxholmgbesxripezokuidfpas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275666.14663-173-192954740203387/AnsiballZ_selinux.py'
Dec 09 10:21:07 compute-0 sudo[35785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:07 compute-0 python3.9[35787]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 09 10:21:07 compute-0 sudo[35785]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:07 compute-0 sudo[35937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdbrngkwgpknohytkbxkjyvcednbmehb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275667.6809974-184-211210865235867/AnsiballZ_command.py'
Dec 09 10:21:07 compute-0 sudo[35937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:08 compute-0 python3.9[35939]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 09 10:21:09 compute-0 sudo[35937]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:09 compute-0 sudo[36091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pheyzqrpgezaqtexdrnvogbjnwfoormf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275669.4055932-192-168831218881216/AnsiballZ_file.py'
Dec 09 10:21:09 compute-0 sudo[36091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:11 compute-0 python3.9[36093]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:21:11 compute-0 sudo[36091]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:12 compute-0 sudo[36243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfqkdhkdszaayofecrsjjespiondcuzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275671.6897054-200-280319910834743/AnsiballZ_mount.py'
Dec 09 10:21:12 compute-0 sudo[36243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:12 compute-0 python3.9[36245]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 09 10:21:12 compute-0 sudo[36243]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:13 compute-0 sudo[36395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trpgpqmdpjrsjeiucnlpmopheeruavyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275673.235845-228-273050060227911/AnsiballZ_file.py'
Dec 09 10:21:13 compute-0 sudo[36395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:14 compute-0 python3.9[36397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:21:14 compute-0 sudo[36395]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:14 compute-0 sudo[36547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqweavziqjffosaoxpmmstylvycsuinn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275674.48044-236-40635590411434/AnsiballZ_stat.py'
Dec 09 10:21:14 compute-0 sudo[36547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:17 compute-0 python3.9[36549]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:21:17 compute-0 sudo[36547]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:18 compute-0 sudo[36670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrfliugmxejhmhlhbaoqaoevlqdiumae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275674.48044-236-40635590411434/AnsiballZ_copy.py'
Dec 09 10:21:18 compute-0 sudo[36670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:19 compute-0 python3.9[36672]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765275674.48044-236-40635590411434/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:21:19 compute-0 sudo[36670]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:20 compute-0 sudo[36822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgjwjkytqjffwouklbyyjjrwfuekgdvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275680.2940974-260-74139038258295/AnsiballZ_stat.py'
Dec 09 10:21:20 compute-0 sudo[36822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:20 compute-0 python3.9[36824]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:21:20 compute-0 sudo[36822]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:21 compute-0 sudo[36974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkkodwcjitfzggurcrktnpbrbxwtqier ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275680.8834739-268-65115665507999/AnsiballZ_command.py'
Dec 09 10:21:21 compute-0 sudo[36974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:21 compute-0 python3.9[36976]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:21:21 compute-0 sudo[36974]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:21 compute-0 sudo[37127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rilcpjeetewghngtyulfdktsdlcxhydi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275681.6434543-276-256693651907719/AnsiballZ_file.py'
Dec 09 10:21:21 compute-0 sudo[37127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:22 compute-0 python3.9[37129]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:21:22 compute-0 sudo[37127]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:22 compute-0 sudo[37279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbtxgdhglkogcotpsoepfuonzpvcxwog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275682.4710333-287-7296993678132/AnsiballZ_getent.py'
Dec 09 10:21:22 compute-0 sudo[37279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:23 compute-0 python3.9[37281]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 09 10:21:23 compute-0 sudo[37279]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:23 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 10:21:23 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 10:21:23 compute-0 sudo[37433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlikhdbiogjyiglmgrsjeiabfuqfkzae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275683.2717688-295-131173150538701/AnsiballZ_group.py'
Dec 09 10:21:23 compute-0 sudo[37433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:23 compute-0 python3.9[37435]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 10:21:23 compute-0 groupadd[37436]: group added to /etc/group: name=qemu, GID=107
Dec 09 10:21:23 compute-0 groupadd[37436]: group added to /etc/gshadow: name=qemu
Dec 09 10:21:23 compute-0 groupadd[37436]: new group: name=qemu, GID=107
Dec 09 10:21:23 compute-0 sudo[37433]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:24 compute-0 sudo[37591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddrmggatxfcnlaerjyrhwimtzqudvnwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275684.1464293-303-42888309508867/AnsiballZ_user.py'
Dec 09 10:21:24 compute-0 sudo[37591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:24 compute-0 python3.9[37593]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 10:21:24 compute-0 useradd[37595]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 09 10:21:24 compute-0 sudo[37591]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:25 compute-0 sudo[37751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qummwdtcxwadexguhgipublqbrbyrbif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275685.1320438-311-64634430172176/AnsiballZ_getent.py'
Dec 09 10:21:25 compute-0 sudo[37751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:25 compute-0 python3.9[37753]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 09 10:21:25 compute-0 sudo[37751]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:26 compute-0 sudo[37904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kosidqujbnhuaqujsxuioxzqjurdzqzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275685.819741-319-110670554409377/AnsiballZ_group.py'
Dec 09 10:21:26 compute-0 sudo[37904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:26 compute-0 python3.9[37906]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 10:21:26 compute-0 groupadd[37907]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 09 10:21:26 compute-0 groupadd[37907]: group added to /etc/gshadow: name=hugetlbfs
Dec 09 10:21:26 compute-0 groupadd[37907]: new group: name=hugetlbfs, GID=42477
Dec 09 10:21:26 compute-0 sudo[37904]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:27 compute-0 sudo[38062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koktdkmoxyyznamciosqlwqcrnqypdot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275686.747363-328-248519396461787/AnsiballZ_file.py'
Dec 09 10:21:27 compute-0 sudo[38062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:27 compute-0 python3.9[38064]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 09 10:21:27 compute-0 sudo[38062]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:27 compute-0 sudo[38214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aaovetgzsqiibxzzpvjaieyykysbrcqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275687.5727358-339-8617630725737/AnsiballZ_dnf.py'
Dec 09 10:21:27 compute-0 sudo[38214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:28 compute-0 python3.9[38216]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:21:29 compute-0 sudo[38214]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:30 compute-0 sudo[38367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyxibmecfmysfsbzhaubvnemyyocpfls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275689.9480083-347-163807299471347/AnsiballZ_file.py'
Dec 09 10:21:30 compute-0 sudo[38367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:30 compute-0 python3.9[38369]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:21:30 compute-0 sudo[38367]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:30 compute-0 sudo[38519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgnsvsafvaquokiabcwnshduemomyqun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275690.5965059-355-206399356458261/AnsiballZ_stat.py'
Dec 09 10:21:30 compute-0 sudo[38519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:31 compute-0 python3.9[38521]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:21:31 compute-0 sudo[38519]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:31 compute-0 sudo[38642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koimjkvzizgjfabqkderalmkaaacswtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275690.5965059-355-206399356458261/AnsiballZ_copy.py'
Dec 09 10:21:31 compute-0 sudo[38642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:31 compute-0 python3.9[38644]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765275690.5965059-355-206399356458261/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:21:31 compute-0 sudo[38642]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:32 compute-0 sudo[38794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaazcukgqzxssflgeimtdxpczwehgmrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275691.8569846-370-177006273678631/AnsiballZ_systemd.py'
Dec 09 10:21:32 compute-0 sudo[38794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:32 compute-0 python3.9[38796]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:21:32 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 09 10:21:32 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 09 10:21:32 compute-0 kernel: Bridge firewalling registered
Dec 09 10:21:32 compute-0 systemd-modules-load[38800]: Inserted module 'br_netfilter'
Dec 09 10:21:32 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 09 10:21:32 compute-0 sudo[38794]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:33 compute-0 sudo[38953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjxbetmbsehpwhvcslcrrndopbgvtnpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275693.0635216-378-147068434606658/AnsiballZ_stat.py'
Dec 09 10:21:33 compute-0 sudo[38953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:33 compute-0 python3.9[38955]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:21:33 compute-0 sudo[38953]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:33 compute-0 sudo[39076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nezsszdbruiapokrsbgsqevblukcmgir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275693.0635216-378-147068434606658/AnsiballZ_copy.py'
Dec 09 10:21:33 compute-0 sudo[39076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:34 compute-0 python3.9[39078]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765275693.0635216-378-147068434606658/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:21:34 compute-0 sudo[39076]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:34 compute-0 sudo[39228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmeywikwsazlpkxuaagxhugxjarswqcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275694.5409126-396-128278777782891/AnsiballZ_dnf.py'
Dec 09 10:21:34 compute-0 sudo[39228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:35 compute-0 python3.9[39230]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:21:38 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec 09 10:21:38 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec 09 10:21:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 10:21:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 10:21:38 compute-0 systemd[1]: Reloading.
Dec 09 10:21:38 compute-0 systemd-rc-local-generator[39293]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:21:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 10:21:40 compute-0 sudo[39228]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:41 compute-0 python3.9[41301]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:21:41 compute-0 python3.9[42347]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 09 10:21:42 compute-0 python3.9[43062]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:21:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 10:21:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 10:21:42 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.777s CPU time.
Dec 09 10:21:42 compute-0 systemd[1]: run-rfd6922ee7b38422ba3a8de83e46ce125.service: Deactivated successfully.
Dec 09 10:21:43 compute-0 sudo[43428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beapxtqjdqozmiujvyzkovbsrtfrlpfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275702.8685193-435-98405488029695/AnsiballZ_command.py'
Dec 09 10:21:43 compute-0 sudo[43428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:43 compute-0 python3.9[43430]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:21:43 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 09 10:21:44 compute-0 systemd[1]: Starting Authorization Manager...
Dec 09 10:21:44 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 09 10:21:44 compute-0 polkitd[43647]: Started polkitd version 0.117
Dec 09 10:21:44 compute-0 polkitd[43647]: Loading rules from directory /etc/polkit-1/rules.d
Dec 09 10:21:44 compute-0 polkitd[43647]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 09 10:21:44 compute-0 polkitd[43647]: Finished loading, compiling and executing 2 rules
Dec 09 10:21:44 compute-0 polkitd[43647]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 09 10:21:44 compute-0 systemd[1]: Started Authorization Manager.
Dec 09 10:21:44 compute-0 sudo[43428]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:45 compute-0 sudo[43816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjfpabuouadwncoibojdlccfezykfbbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275704.8230028-444-119409867514996/AnsiballZ_systemd.py'
Dec 09 10:21:45 compute-0 sudo[43816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:45 compute-0 python3.9[43818]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:21:45 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 09 10:21:45 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 09 10:21:45 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 09 10:21:45 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 09 10:21:45 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 09 10:21:45 compute-0 sudo[43816]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:46 compute-0 python3.9[43979]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 09 10:21:48 compute-0 sudo[44129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edjqtsxfdcongufwvdnczxnbqlwhpnbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275708.130501-501-217770158983292/AnsiballZ_systemd.py'
Dec 09 10:21:48 compute-0 sudo[44129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:48 compute-0 python3.9[44131]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:21:48 compute-0 systemd[1]: Reloading.
Dec 09 10:21:48 compute-0 systemd-rc-local-generator[44152]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:21:49 compute-0 sudo[44129]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:49 compute-0 sudo[44317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feqnmmjwqvtpuarjkbdtbplfwbdbswhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275709.304411-501-154740711430067/AnsiballZ_systemd.py'
Dec 09 10:21:49 compute-0 sudo[44317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:49 compute-0 python3.9[44319]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:21:49 compute-0 systemd[1]: Reloading.
Dec 09 10:21:49 compute-0 systemd-rc-local-generator[44347]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:21:50 compute-0 sudo[44317]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:50 compute-0 sudo[44506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujxzkdoieebzfcrubalvqvmgdlbeucu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275710.3346426-517-236310757881780/AnsiballZ_command.py'
Dec 09 10:21:50 compute-0 sudo[44506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:50 compute-0 python3.9[44508]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:21:50 compute-0 sudo[44506]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:51 compute-0 sudo[44659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saxmmeldxynkrlbtigbnlbdwkogqrkar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275711.0180418-525-232737701214711/AnsiballZ_command.py'
Dec 09 10:21:51 compute-0 sudo[44659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:51 compute-0 python3.9[44661]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:21:51 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 09 10:21:51 compute-0 sudo[44659]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:51 compute-0 sudo[44812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etfwkiyskjisrkttvmpqkvvufkpcchmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275711.6837313-533-93758483861010/AnsiballZ_command.py'
Dec 09 10:21:51 compute-0 sudo[44812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:52 compute-0 python3.9[44814]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:21:53 compute-0 sudo[44812]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:54 compute-0 sudo[44974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xomanxcbqnuibzsrurcwhbpsyzlzicup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275713.6836398-541-73923556132560/AnsiballZ_command.py'
Dec 09 10:21:54 compute-0 sudo[44974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:54 compute-0 python3.9[44976]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:21:54 compute-0 sudo[44974]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:54 compute-0 sudo[45127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrhwdicqkdtebmtjmjskslmtvyxcbcsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275714.383884-549-98479504918646/AnsiballZ_systemd.py'
Dec 09 10:21:54 compute-0 sudo[45127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:21:54 compute-0 python3.9[45129]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:21:55 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 09 10:21:55 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 09 10:21:55 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 09 10:21:55 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 09 10:21:55 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 09 10:21:55 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 09 10:21:55 compute-0 sudo[45127]: pam_unix(sudo:session): session closed for user root
Dec 09 10:21:55 compute-0 sshd-session[43797]: error: kex_exchange_identification: read: Connection timed out
Dec 09 10:21:55 compute-0 sshd-session[43797]: banner exchange: Connection from 27.148.182.148 port 39686: Connection timed out
Dec 09 10:21:55 compute-0 sshd-session[31481]: Connection closed by 192.168.122.30 port 60594
Dec 09 10:21:55 compute-0 sshd-session[31478]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:21:55 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 09 10:21:55 compute-0 systemd[1]: session-10.scope: Consumed 2min 16.946s CPU time.
Dec 09 10:21:55 compute-0 systemd-logind[806]: Session 10 logged out. Waiting for processes to exit.
Dec 09 10:21:55 compute-0 systemd-logind[806]: Removed session 10.
Dec 09 10:22:01 compute-0 sshd-session[45159]: Accepted publickey for zuul from 192.168.122.30 port 33534 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:22:01 compute-0 systemd-logind[806]: New session 11 of user zuul.
Dec 09 10:22:01 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 09 10:22:01 compute-0 sshd-session[45159]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:22:02 compute-0 python3.9[45312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:22:04 compute-0 python3.9[45466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:22:05 compute-0 sudo[45620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyjndeumrymlsypzqufjockdtifdhmqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275724.8606732-50-252157772604683/AnsiballZ_command.py'
Dec 09 10:22:05 compute-0 sudo[45620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:05 compute-0 python3.9[45622]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:22:05 compute-0 sudo[45620]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:06 compute-0 python3.9[45773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:22:07 compute-0 sudo[45927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdnzbvxunhkzecwzsmvjlmebygzvdacs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275726.9997613-70-97625608829687/AnsiballZ_setup.py'
Dec 09 10:22:07 compute-0 sudo[45927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:07 compute-0 python3.9[45929]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:22:07 compute-0 sudo[45927]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:09 compute-0 sudo[46011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdnuvsqmrbstpqinldlxpmwokqteeyld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275726.9997613-70-97625608829687/AnsiballZ_dnf.py'
Dec 09 10:22:09 compute-0 sudo[46011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:09 compute-0 python3.9[46013]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:22:10 compute-0 sudo[46011]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:11 compute-0 sudo[46164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylfevxajkdgchrgfuelmgxhifbfddhdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275730.7572422-82-130760442305933/AnsiballZ_setup.py'
Dec 09 10:22:11 compute-0 sudo[46164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:11 compute-0 python3.9[46166]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:22:11 compute-0 sudo[46164]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:12 compute-0 sudo[46335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhpzrpsdyvpnrukaggrawzypysiqbbkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275731.811947-93-165453487467360/AnsiballZ_file.py'
Dec 09 10:22:12 compute-0 sudo[46335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:12 compute-0 python3.9[46337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:22:12 compute-0 sudo[46335]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:13 compute-0 sudo[46487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfprwrpbzdwgwpuwcjdxhealsojzthyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275732.7558014-101-57408422347688/AnsiballZ_command.py'
Dec 09 10:22:13 compute-0 sudo[46487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:13 compute-0 python3.9[46489]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:22:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat826056121-merged.mount: Deactivated successfully.
Dec 09 10:22:13 compute-0 podman[46490]: 2025-12-09 10:22:13.342446552 +0000 UTC m=+0.060836813 system refresh
Dec 09 10:22:13 compute-0 sudo[46487]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:14 compute-0 sudo[46650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxzzfpyipiyjmkrczoxijrdotwwvlsei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275733.5331376-109-212200433808551/AnsiballZ_stat.py'
Dec 09 10:22:14 compute-0 sudo[46650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:14 compute-0 python3.9[46652]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:22:14 compute-0 sudo[46650]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:22:14 compute-0 sudo[46773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcqvretuiprvbrzeikokdyxtflpbxiht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275733.5331376-109-212200433808551/AnsiballZ_copy.py'
Dec 09 10:22:14 compute-0 sudo[46773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:15 compute-0 python3.9[46775]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765275733.5331376-109-212200433808551/.source.json follow=False _original_basename=podman_network_config.j2 checksum=938d75e8df053001260675f2a6ecbedd13d6884b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:22:15 compute-0 sudo[46773]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:15 compute-0 sudo[46925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfkjdwgiztqdqfzwjddiijbtlsnoadmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275735.2148852-124-102073897701465/AnsiballZ_stat.py'
Dec 09 10:22:15 compute-0 sudo[46925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:15 compute-0 python3.9[46927]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:22:15 compute-0 sudo[46925]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:16 compute-0 sudo[47048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yltpfigijmjjmqqtrtbiljsydlcwpceu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275735.2148852-124-102073897701465/AnsiballZ_copy.py'
Dec 09 10:22:16 compute-0 sudo[47048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:16 compute-0 python3.9[47050]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765275735.2148852-124-102073897701465/.source.conf follow=False _original_basename=registries.conf.j2 checksum=75cbff578cac25096c07a1fc71278e69a134eb3a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:22:16 compute-0 sudo[47048]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:17 compute-0 sudo[47200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuxilhbjtpfzkvfptlvtzuyzgtpwczyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275736.7024357-140-272042790448432/AnsiballZ_ini_file.py'
Dec 09 10:22:17 compute-0 sudo[47200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:17 compute-0 python3.9[47202]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:22:17 compute-0 sudo[47200]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:17 compute-0 sudo[47352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-namzkmdcahoospvmjeqwumefmzovfplo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275737.558935-140-195557570350671/AnsiballZ_ini_file.py'
Dec 09 10:22:17 compute-0 sudo[47352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:18 compute-0 python3.9[47354]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:22:18 compute-0 sudo[47352]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:18 compute-0 sudo[47504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mflomnraesnpymnlbpgsnygeofgjnirw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275738.3003278-140-213877375204375/AnsiballZ_ini_file.py'
Dec 09 10:22:18 compute-0 sudo[47504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:18 compute-0 python3.9[47506]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:22:18 compute-0 sudo[47504]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:19 compute-0 sudo[47656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niskcydgkarzcxexbpwfaqdbasxmdfei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275738.9709558-140-101030045694283/AnsiballZ_ini_file.py'
Dec 09 10:22:19 compute-0 sudo[47656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:19 compute-0 python3.9[47658]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:22:19 compute-0 sudo[47656]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:20 compute-0 python3.9[47808]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:22:21 compute-0 sudo[47960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcaufqyutzjavzsdijoythdowtevpvbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275740.8501441-180-214993377201843/AnsiballZ_dnf.py'
Dec 09 10:22:21 compute-0 sudo[47960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:21 compute-0 python3.9[47962]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:22:22 compute-0 sudo[47960]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:23 compute-0 sudo[48113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqtohjqqwjfpfrishgkmggjqlfsvzktx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275742.832669-188-164649728440206/AnsiballZ_dnf.py'
Dec 09 10:22:23 compute-0 sudo[48113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:23 compute-0 python3.9[48115]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:22:25 compute-0 sudo[48113]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:25 compute-0 sudo[48273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acgomcmhzwnxfscraoaoubwcdswreoth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275745.6813104-198-261861408499963/AnsiballZ_dnf.py'
Dec 09 10:22:25 compute-0 sudo[48273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:26 compute-0 python3.9[48275]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:22:27 compute-0 sudo[48273]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:27 compute-0 sudo[48426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nemgztrunttwwgayygqjdrcatpqmumka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275747.6516411-207-18787739552901/AnsiballZ_dnf.py'
Dec 09 10:22:28 compute-0 sudo[48426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:28 compute-0 python3.9[48428]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:22:29 compute-0 sudo[48426]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:30 compute-0 sudo[48579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmxtjpyxsmnfilmaftibvvgxhakrhwae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275749.9019637-218-37978698049697/AnsiballZ_dnf.py'
Dec 09 10:22:30 compute-0 sudo[48579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:30 compute-0 python3.9[48581]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:22:32 compute-0 sudo[48579]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:32 compute-0 sudo[48735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkchhmtbhvslpwusiekdhtlgkgayudnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275752.24045-226-210350948658506/AnsiballZ_dnf.py'
Dec 09 10:22:32 compute-0 sudo[48735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:32 compute-0 python3.9[48737]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:22:35 compute-0 sudo[48735]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:36 compute-0 sudo[48903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itfntauoblieylwkxvwsrxlqnycbvefz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275756.1180506-235-140937735111939/AnsiballZ_dnf.py'
Dec 09 10:22:36 compute-0 sudo[48903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:36 compute-0 python3.9[48905]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:22:37 compute-0 sudo[48903]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:38 compute-0 sudo[49056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bprjrscaupythvpwtvqdimpetlkbrvyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275758.1704328-244-51216393108064/AnsiballZ_dnf.py'
Dec 09 10:22:38 compute-0 sudo[49056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:38 compute-0 python3.9[49058]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:22:51 compute-0 sudo[49056]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:52 compute-0 sudo[49394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqtjsmgkzmcekeshfigyqwkrgylqmxsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275772.0942547-253-245059862606580/AnsiballZ_dnf.py'
Dec 09 10:22:52 compute-0 sudo[49394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:52 compute-0 python3.9[49396]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:22:54 compute-0 sudo[49394]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:54 compute-0 sudo[49550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czjnhwkpkvmmdpazyxfssczsvzxdgybo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275774.3195956-264-122306580951775/AnsiballZ_file.py'
Dec 09 10:22:54 compute-0 sudo[49550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:54 compute-0 python3.9[49552]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:22:54 compute-0 sudo[49550]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:55 compute-0 sudo[49725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eestxrwjlqacizupufghywtmouoaoqtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275775.016902-272-262647550649571/AnsiballZ_stat.py'
Dec 09 10:22:55 compute-0 sudo[49725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:55 compute-0 python3.9[49727]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:22:55 compute-0 sudo[49725]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:56 compute-0 sudo[49848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otmqmgtazihanimruylbydzojbhzpgwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275775.016902-272-262647550649571/AnsiballZ_copy.py'
Dec 09 10:22:56 compute-0 sudo[49848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:56 compute-0 python3.9[49850]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765275775.016902-272-262647550649571/.source.json _original_basename=.oogsf4cq follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:22:56 compute-0 sudo[49848]: pam_unix(sudo:session): session closed for user root
Dec 09 10:22:57 compute-0 sudo[50000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grzhcugmnletuvpbrqstiinkvrihluvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275776.572221-290-104764598225331/AnsiballZ_podman_image.py'
Dec 09 10:22:57 compute-0 sudo[50000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:22:57 compute-0 python3.9[50002]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 09 10:22:57 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2186088384-lower\x2dmapped.mount: Deactivated successfully.
Dec 09 10:23:03 compute-0 podman[50015]: 2025-12-09 10:23:03.640938768 +0000 UTC m=+6.241893103 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 09 10:23:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:03 compute-0 sudo[50000]: pam_unix(sudo:session): session closed for user root
Dec 09 10:23:04 compute-0 sudo[50309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fryxvrxymepivkftyoqenhvnlqlpkble ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275784.1952796-301-9084883766316/AnsiballZ_podman_image.py'
Dec 09 10:23:04 compute-0 sudo[50309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:23:04 compute-0 python3.9[50311]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 09 10:23:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:17 compute-0 podman[50323]: 2025-12-09 10:23:17.256255693 +0000 UTC m=+12.519198874 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 09 10:23:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:17 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:17 compute-0 sudo[50309]: pam_unix(sudo:session): session closed for user root
Dec 09 10:23:18 compute-0 sudo[50615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlqrncexzzoownhfepksymhjbnfxtehw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275797.7840347-311-180861817445021/AnsiballZ_podman_image.py'
Dec 09 10:23:18 compute-0 sudo[50615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:23:18 compute-0 python3.9[50617]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 09 10:23:18 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:19 compute-0 podman[50629]: 2025-12-09 10:23:19.833520819 +0000 UTC m=+1.442707292 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 09 10:23:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:20 compute-0 sudo[50615]: pam_unix(sudo:session): session closed for user root
Dec 09 10:23:20 compute-0 sudo[50861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-razhuqblhizxponsyqzlmpumhbgvgees ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275800.3053885-320-227690928834578/AnsiballZ_podman_image.py'
Dec 09 10:23:20 compute-0 sudo[50861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:23:20 compute-0 python3.9[50863]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 09 10:23:32 compute-0 podman[50874]: 2025-12-09 10:23:32.842862455 +0000 UTC m=+12.014954323 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 09 10:23:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:32 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:33 compute-0 sudo[50861]: pam_unix(sudo:session): session closed for user root
Dec 09 10:23:33 compute-0 sudo[51140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcwctqicnnilntofagbugayscrmkixdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275813.491259-331-121613155122811/AnsiballZ_podman_image.py'
Dec 09 10:23:33 compute-0 sudo[51140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:23:34 compute-0 python3.9[51142]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 09 10:23:34 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:52 compute-0 podman[51154]: 2025-12-09 10:23:52.118744242 +0000 UTC m=+18.036640620 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 09 10:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:52 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:52 compute-0 sudo[51140]: pam_unix(sudo:session): session closed for user root
Dec 09 10:23:52 compute-0 sudo[51481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vujxjczhdiezqwcivdppdcgkpxvdymcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275832.4529438-331-60681243124675/AnsiballZ_podman_image.py'
Dec 09 10:23:52 compute-0 sudo[51481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:23:52 compute-0 python3.9[51483]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 09 10:23:54 compute-0 podman[51496]: 2025-12-09 10:23:54.191523618 +0000 UTC m=+1.159382361 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 09 10:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:54 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:54 compute-0 sudo[51481]: pam_unix(sudo:session): session closed for user root
Dec 09 10:23:54 compute-0 sudo[51769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtseksgcfptjeguhpafurifknrjwuamy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275834.6476834-347-183664210151719/AnsiballZ_podman_image.py'
Dec 09 10:23:54 compute-0 sudo[51769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:23:55 compute-0 python3.9[51771]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 09 10:23:58 compute-0 podman[51783]: 2025-12-09 10:23:58.322892877 +0000 UTC m=+3.140946801 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 09 10:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:23:58 compute-0 sudo[51769]: pam_unix(sudo:session): session closed for user root
Dec 09 10:23:59 compute-0 sudo[52037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uclpxzvymcrnjvvmvewtmqgvedgcgony ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275838.7399998-347-260943839852120/AnsiballZ_podman_image.py'
Dec 09 10:23:59 compute-0 sudo[52037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:23:59 compute-0 python3.9[52039]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 09 10:23:59 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:24:06 compute-0 podman[52051]: 2025-12-09 10:24:06.331515471 +0000 UTC m=+6.931333623 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 09 10:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:24:06 compute-0 sudo[52037]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:07 compute-0 sshd-session[45162]: Connection closed by 192.168.122.30 port 33534
Dec 09 10:24:07 compute-0 sshd-session[45159]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:24:07 compute-0 systemd-logind[806]: Session 11 logged out. Waiting for processes to exit.
Dec 09 10:24:07 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 09 10:24:07 compute-0 systemd[1]: session-11.scope: Consumed 2min 25.907s CPU time.
Dec 09 10:24:07 compute-0 systemd-logind[806]: Removed session 11.
Dec 09 10:24:12 compute-0 sshd-session[52299]: Accepted publickey for zuul from 192.168.122.30 port 58776 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:24:12 compute-0 systemd-logind[806]: New session 12 of user zuul.
Dec 09 10:24:12 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 09 10:24:12 compute-0 sshd-session[52299]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:24:13 compute-0 python3.9[52452]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:24:14 compute-0 sudo[52606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eleoqctqgucqhezdakxkqxkzaxzkyymf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275854.1165915-36-131830143880411/AnsiballZ_getent.py'
Dec 09 10:24:14 compute-0 sudo[52606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:14 compute-0 python3.9[52608]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 09 10:24:14 compute-0 sudo[52606]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:15 compute-0 sudo[52759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlboolfuqvlxvbpfqfadjbhdyxqxzuta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275855.0006697-44-39308110536335/AnsiballZ_group.py'
Dec 09 10:24:15 compute-0 sudo[52759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:15 compute-0 python3.9[52761]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 10:24:15 compute-0 groupadd[52762]: group added to /etc/group: name=openvswitch, GID=42476
Dec 09 10:24:15 compute-0 groupadd[52762]: group added to /etc/gshadow: name=openvswitch
Dec 09 10:24:15 compute-0 groupadd[52762]: new group: name=openvswitch, GID=42476
Dec 09 10:24:15 compute-0 sudo[52759]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:16 compute-0 sudo[52917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiphcvgitejppgiyfrrhgqyxukolrzay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275855.92266-52-8565509436610/AnsiballZ_user.py'
Dec 09 10:24:16 compute-0 sudo[52917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:16 compute-0 python3.9[52919]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 10:24:16 compute-0 useradd[52921]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 09 10:24:16 compute-0 useradd[52921]: add 'openvswitch' to group 'hugetlbfs'
Dec 09 10:24:16 compute-0 useradd[52921]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 09 10:24:16 compute-0 sudo[52917]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:17 compute-0 sudo[53077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaddokwmxosvtieoyrvsgtwdqqopsfom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275856.9625182-62-275887040530687/AnsiballZ_setup.py'
Dec 09 10:24:17 compute-0 sudo[53077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:17 compute-0 python3.9[53079]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:24:17 compute-0 sudo[53077]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:18 compute-0 sudo[53161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbfltsvzusldwcnppimsjwgtdvvlzfqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275856.9625182-62-275887040530687/AnsiballZ_dnf.py'
Dec 09 10:24:18 compute-0 sudo[53161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:18 compute-0 python3.9[53163]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:24:19 compute-0 sudo[53161]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:20 compute-0 sudo[53323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kquyolegdcbjahqkbfnmofogsuxsanmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275860.0977485-76-274281943145169/AnsiballZ_dnf.py'
Dec 09 10:24:20 compute-0 sudo[53323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:20 compute-0 python3.9[53325]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:24:27 compute-0 sshd-session[53340]: Connection closed by 159.223.8.217 port 44228
Dec 09 10:24:36 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Dec 09 10:24:36 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 10:24:36 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 10:24:36 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 10:24:36 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 10:24:36 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 10:24:36 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 10:24:36 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 10:24:37 compute-0 groupadd[53349]: group added to /etc/group: name=unbound, GID=993
Dec 09 10:24:37 compute-0 groupadd[53349]: group added to /etc/gshadow: name=unbound
Dec 09 10:24:37 compute-0 groupadd[53349]: new group: name=unbound, GID=993
Dec 09 10:24:37 compute-0 useradd[53356]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 09 10:24:37 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 09 10:24:37 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 09 10:24:38 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 10:24:38 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 10:24:38 compute-0 systemd[1]: Reloading.
Dec 09 10:24:39 compute-0 systemd-rc-local-generator[53851]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:24:39 compute-0 systemd-sysv-generator[53857]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:24:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 10:24:42 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 10:24:42 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 10:24:42 compute-0 systemd[1]: run-r531628cc023549a280d2755f8dd0ecd8.service: Deactivated successfully.
Dec 09 10:24:43 compute-0 sudo[53323]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:44 compute-0 sudo[54421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srqhluwutpgqvpryauoepiiiflzzoqyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275883.8398793-84-40228540119456/AnsiballZ_systemd.py'
Dec 09 10:24:44 compute-0 sudo[54421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:44 compute-0 python3.9[54423]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 10:24:44 compute-0 systemd[1]: Reloading.
Dec 09 10:24:45 compute-0 systemd-rc-local-generator[54455]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:24:45 compute-0 systemd-sysv-generator[54458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:24:45 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 09 10:24:45 compute-0 chown[54465]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 09 10:24:45 compute-0 ovs-ctl[54470]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 09 10:24:46 compute-0 ovs-ctl[54470]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 09 10:24:46 compute-0 ovs-ctl[54470]: Starting ovsdb-server [  OK  ]
Dec 09 10:24:46 compute-0 ovs-vsctl[54519]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 09 10:24:46 compute-0 ovs-vsctl[54539]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"9ec27861-bbe8-48fb-b30f-25b967e1609e\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 09 10:24:46 compute-0 ovs-ctl[54470]: Configuring Open vSwitch system IDs [  OK  ]
Dec 09 10:24:46 compute-0 ovs-vsctl[54545]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 09 10:24:46 compute-0 ovs-ctl[54470]: Enabling remote OVSDB managers [  OK  ]
Dec 09 10:24:46 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 09 10:24:46 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 09 10:24:46 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 09 10:24:46 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 09 10:24:46 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 09 10:24:46 compute-0 ovs-ctl[54590]: Inserting openvswitch module [  OK  ]
Dec 09 10:24:46 compute-0 ovs-ctl[54559]: Starting ovs-vswitchd [  OK  ]
Dec 09 10:24:46 compute-0 ovs-vsctl[54607]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 09 10:24:46 compute-0 ovs-ctl[54559]: Enabling remote OVSDB managers [  OK  ]
Dec 09 10:24:46 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 09 10:24:46 compute-0 systemd[1]: Starting Open vSwitch...
Dec 09 10:24:46 compute-0 systemd[1]: Finished Open vSwitch.
Dec 09 10:24:46 compute-0 sudo[54421]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:47 compute-0 python3.9[54759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:24:48 compute-0 sudo[54909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imcpbpzzxfneylfoawlxvgxxrjuszauy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275887.60536-102-148377046505846/AnsiballZ_sefcontext.py'
Dec 09 10:24:48 compute-0 sudo[54909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:48 compute-0 python3.9[54911]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 09 10:24:49 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Dec 09 10:24:49 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 10:24:49 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 10:24:49 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 10:24:49 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 10:24:49 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 10:24:49 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 10:24:49 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 10:24:49 compute-0 sudo[54909]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:50 compute-0 python3.9[55066]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:24:51 compute-0 sudo[55222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pemtqvdxqkzegipkuirtrlqvowsydqoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275891.1032124-120-65052518726250/AnsiballZ_dnf.py'
Dec 09 10:24:51 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 09 10:24:51 compute-0 sudo[55222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:51 compute-0 python3.9[55224]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:24:52 compute-0 sudo[55222]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:53 compute-0 sudo[55375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlpuzaynsdvcmuatvphzhwhzoccxvfcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275893.0585964-128-241717682658095/AnsiballZ_command.py'
Dec 09 10:24:53 compute-0 sudo[55375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:53 compute-0 python3.9[55377]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:24:54 compute-0 sudo[55375]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:54 compute-0 sudo[55662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tisvcwkhbfrzlkhaljdkgnpismhhemmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275894.592705-136-67442990281773/AnsiballZ_file.py'
Dec 09 10:24:54 compute-0 sudo[55662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:55 compute-0 python3.9[55664]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 09 10:24:55 compute-0 sudo[55662]: pam_unix(sudo:session): session closed for user root
Dec 09 10:24:55 compute-0 python3.9[55814]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:24:56 compute-0 sudo[55966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uakiwgqmwqrttlfmrelwrtmlckurhidf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275896.2865713-152-67746438702801/AnsiballZ_dnf.py'
Dec 09 10:24:56 compute-0 sudo[55966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:24:56 compute-0 python3.9[55968]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:24:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 10:24:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 10:24:59 compute-0 systemd[1]: Reloading.
Dec 09 10:24:59 compute-0 systemd-sysv-generator[56011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:24:59 compute-0 systemd-rc-local-generator[56008]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:24:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 10:25:01 compute-0 sudo[55966]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:01 compute-0 sudo[56282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbwvhqxuplbaxnkjecjvskloabznftig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275901.2127905-160-61245966830550/AnsiballZ_systemd.py'
Dec 09 10:25:01 compute-0 sudo[56282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:01 compute-0 python3.9[56284]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:25:01 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 09 10:25:01 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 09 10:25:01 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 09 10:25:01 compute-0 systemd[1]: Stopping Network Manager...
Dec 09 10:25:01 compute-0 NetworkManager[7184]: <info>  [1765275901.8110] caught SIGTERM, shutting down normally.
Dec 09 10:25:01 compute-0 NetworkManager[7184]: <info>  [1765275901.8128] dhcp4 (eth0): canceled DHCP transaction
Dec 09 10:25:01 compute-0 NetworkManager[7184]: <info>  [1765275901.8129] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 10:25:01 compute-0 NetworkManager[7184]: <info>  [1765275901.8129] dhcp4 (eth0): state changed no lease
Dec 09 10:25:01 compute-0 NetworkManager[7184]: <info>  [1765275901.8134] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 10:25:01 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 10:25:01 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 10:25:02 compute-0 NetworkManager[7184]: <info>  [1765275902.2299] exiting (success)
Dec 09 10:25:02 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 09 10:25:02 compute-0 systemd[1]: Stopped Network Manager.
Dec 09 10:25:02 compute-0 systemd[1]: NetworkManager.service: Consumed 15.787s CPU time, 4.1M memory peak, read 0B from disk, written 31.5K to disk.
Dec 09 10:25:02 compute-0 systemd[1]: Starting Network Manager...
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.2932] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:f43569a1-1096-4e67-91b2-bda287c55398)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.2934] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.2983] manager[0x55c704c6f000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 09 10:25:02 compute-0 systemd[1]: Starting Hostname Service...
Dec 09 10:25:02 compute-0 systemd[1]: Started Hostname Service.
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3770] hostname: hostname: using hostnamed
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3771] hostname: static hostname changed from (none) to "compute-0"
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3775] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3779] manager[0x55c704c6f000]: rfkill: Wi-Fi hardware radio set enabled
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3779] manager[0x55c704c6f000]: rfkill: WWAN hardware radio set enabled
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3802] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3811] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3811] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3812] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3812] manager: Networking is enabled by state file
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3814] settings: Loaded settings plugin: keyfile (internal)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3818] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3836] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3843] dhcp: init: Using DHCP client 'internal'
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3845] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3849] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3853] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3858] device (lo): Activation: starting connection 'lo' (4d2460cc-3851-4697-811d-bb6085f75db6)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3863] device (eth0): carrier: link connected
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3866] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3870] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3870] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3874] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3878] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3882] device (eth1): carrier: link connected
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3885] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3890] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f) (indicated)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3890] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3895] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3900] device (eth1): Activation: starting connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec 09 10:25:02 compute-0 systemd[1]: Started Network Manager.
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3904] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3909] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3922] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3923] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3926] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3928] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3930] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3932] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3943] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3949] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3952] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3958] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3970] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3977] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3979] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3983] device (lo): Activation: successful, device activated.
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3989] dhcp4 (eth0): state changed new lease, address=38.102.83.201
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.3995] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 09 10:25:02 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 09 10:25:02 compute-0 sudo[56282]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:02 compute-0 sudo[56477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqznxessfxayllxtzwgxvycfzvsdefgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275902.6218464-168-45092463309993/AnsiballZ_dnf.py'
Dec 09 10:25:02 compute-0 sudo[56477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9701] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9729] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9737] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9741] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9745] device (eth1): Activation: successful, device activated.
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9782] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9785] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9790] manager: NetworkManager state is now CONNECTED_SITE
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9794] device (eth0): Activation: successful, device activated.
Dec 09 10:25:02 compute-0 NetworkManager[56302]: <info>  [1765275902.9800] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 09 10:25:03 compute-0 NetworkManager[56302]: <info>  [1765275903.0889] manager: startup complete
Dec 09 10:25:03 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 09 10:25:03 compute-0 python3.9[56479]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:25:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 10:25:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 10:25:03 compute-0 systemd[1]: run-r479ddfe5a6484992afedacf8ad2bd388.service: Deactivated successfully.
Dec 09 10:25:13 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 10:25:14 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 10:25:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 10:25:14 compute-0 systemd[1]: Reloading.
Dec 09 10:25:14 compute-0 systemd-rc-local-generator[56560]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:25:14 compute-0 systemd-sysv-generator[56566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:25:15 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 10:25:15 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 10:25:15 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 10:25:15 compute-0 systemd[1]: run-rb7bd3c19306b4ca98676e6f6b27f9e43.service: Deactivated successfully.
Dec 09 10:25:16 compute-0 sudo[56477]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:16 compute-0 sudo[56968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozkahyugsmocndpplyftdjrtmeiucepp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275916.4579382-180-268547369639427/AnsiballZ_stat.py'
Dec 09 10:25:16 compute-0 sudo[56968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:16 compute-0 python3.9[56970]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:25:16 compute-0 sudo[56968]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:17 compute-0 sudo[57120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrzoxfrieyxftpdrtaakkqqkuotnehxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275917.1332414-189-82246567285408/AnsiballZ_ini_file.py'
Dec 09 10:25:17 compute-0 sudo[57120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:17 compute-0 python3.9[57122]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:17 compute-0 sudo[57120]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:18 compute-0 sudo[57274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugnqmglkqycxlalmmhpkpfacfbuodwhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275917.9592714-199-49738141341892/AnsiballZ_ini_file.py'
Dec 09 10:25:18 compute-0 sudo[57274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:18 compute-0 python3.9[57276]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:18 compute-0 sudo[57274]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:18 compute-0 sudo[57426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsinnrcocqranrzvabreshhmirljxzmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275918.5865664-199-55033515213951/AnsiballZ_ini_file.py'
Dec 09 10:25:18 compute-0 sudo[57426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:19 compute-0 python3.9[57428]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:19 compute-0 sudo[57426]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:19 compute-0 sudo[57578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbdvdlxrcfxgywovbcjpkudeivvbbpub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275919.2713845-214-89875457707730/AnsiballZ_ini_file.py'
Dec 09 10:25:19 compute-0 sudo[57578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:19 compute-0 python3.9[57580]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:19 compute-0 sudo[57578]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:20 compute-0 sudo[57730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yskrxbjsuadzvevslzzuhuizhbiziqzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275919.957506-214-189954837978247/AnsiballZ_ini_file.py'
Dec 09 10:25:20 compute-0 sudo[57730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:20 compute-0 python3.9[57732]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:20 compute-0 sudo[57730]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:20 compute-0 sudo[57882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seoswpisdziihzjcedpyljimsxshxjbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275920.5907054-229-168831412801827/AnsiballZ_stat.py'
Dec 09 10:25:20 compute-0 sudo[57882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:21 compute-0 python3.9[57884]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:25:21 compute-0 sudo[57882]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:21 compute-0 sudo[58005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uluispxfvpijrnuwsgejuxitjkgbhhym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275920.5907054-229-168831412801827/AnsiballZ_copy.py'
Dec 09 10:25:21 compute-0 sudo[58005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:22 compute-0 python3.9[58007]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275920.5907054-229-168831412801827/.source _original_basename=.bkg_i5lg follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:22 compute-0 sudo[58005]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:22 compute-0 sudo[58157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqiophmmdlafhkffkdfooqmpjfiktmbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275922.2224514-244-50972779670969/AnsiballZ_file.py'
Dec 09 10:25:22 compute-0 sudo[58157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:22 compute-0 python3.9[58159]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:22 compute-0 sudo[58157]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:23 compute-0 sudo[58309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bipstneeeorsegojbohsvfdfbrgafitp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275922.8795927-252-133543807810931/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 09 10:25:23 compute-0 sudo[58309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:23 compute-0 python3.9[58311]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 09 10:25:23 compute-0 sudo[58309]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:24 compute-0 sudo[58461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsudwahtdejvkaktubaiuozxkuwsaoqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275923.6971009-261-118883540768941/AnsiballZ_file.py'
Dec 09 10:25:24 compute-0 sudo[58461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:24 compute-0 python3.9[58463]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:24 compute-0 sudo[58461]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:24 compute-0 sudo[58613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uogbleqesddeugorezipqvfgconbayik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275924.4981716-271-223762541392728/AnsiballZ_stat.py'
Dec 09 10:25:24 compute-0 sudo[58613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:25 compute-0 sudo[58613]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:25 compute-0 sudo[58736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anmxtllwssoyizndamapuvlmihrriini ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275924.4981716-271-223762541392728/AnsiballZ_copy.py'
Dec 09 10:25:25 compute-0 sudo[58736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:25 compute-0 sudo[58736]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:26 compute-0 sudo[58888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmegmycouolxyxwfnpufkiedstvnzwxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275926.0664651-286-244074571041289/AnsiballZ_slurp.py'
Dec 09 10:25:26 compute-0 sudo[58888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:26 compute-0 python3.9[58890]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 09 10:25:26 compute-0 sudo[58888]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:27 compute-0 sudo[59063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbyrgwjspjnsfjsyphfitksgkbynltco ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275926.9524686-295-227564661618172/async_wrapper.py j839379842523 300 /home/zuul/.ansible/tmp/ansible-tmp-1765275926.9524686-295-227564661618172/AnsiballZ_edpm_os_net_config.py _'
Dec 09 10:25:27 compute-0 sudo[59063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:28 compute-0 ansible-async_wrapper.py[59065]: Invoked with j839379842523 300 /home/zuul/.ansible/tmp/ansible-tmp-1765275926.9524686-295-227564661618172/AnsiballZ_edpm_os_net_config.py _
Dec 09 10:25:28 compute-0 ansible-async_wrapper.py[59068]: Starting module and watcher
Dec 09 10:25:28 compute-0 ansible-async_wrapper.py[59068]: Start watching 59069 (300)
Dec 09 10:25:28 compute-0 ansible-async_wrapper.py[59069]: Start module (59069)
Dec 09 10:25:28 compute-0 ansible-async_wrapper.py[59065]: Return async_wrapper task started.
Dec 09 10:25:28 compute-0 sudo[59063]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:28 compute-0 python3.9[59070]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 09 10:25:29 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 09 10:25:29 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 09 10:25:29 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 09 10:25:29 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 09 10:25:29 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2193] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2220] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2841] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2842] audit: op="connection-add" uuid="283dc479-3dc8-4c77-a52b-aa6ae2f291b4" name="br-ex-br" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2858] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2859] audit: op="connection-add" uuid="4e610b41-b7c5-45a1-bd61-f214c02ef3cf" name="br-ex-port" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2870] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2870] audit: op="connection-add" uuid="674fe76a-bc52-4f3f-874a-b130701b2895" name="eth1-port" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2881] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2882] audit: op="connection-add" uuid="4ce4554a-760c-4435-988b-77746bb27c13" name="vlan20-port" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2891] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2892] audit: op="connection-add" uuid="8aefaa47-7de4-43e6-b444-ee39152781f4" name="vlan21-port" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2902] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2903] audit: op="connection-add" uuid="2e90ec5a-db3a-4276-b632-c7580998392e" name="vlan22-port" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2923] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2942] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.2944] audit: op="connection-add" uuid="30be0f96-effb-4c5b-9c38-7df15607b059" name="br-ex-if" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3011] audit: op="connection-update" uuid="6b6a22e5-bf6a-510d-869a-e83c7a7cb57f" name="ci-private-network" args="ovs-external-ids.data,ipv6.method,ipv6.routes,ipv6.dns,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.addresses,ovs-interface.type,ipv4.method,ipv4.routes,ipv4.dns,ipv4.routing-rules,ipv4.addresses,ipv4.never-default,connection.port-type,connection.timestamp,connection.controller,connection.slave-type,connection.master" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3029] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3030] audit: op="connection-add" uuid="5c4dfb5e-4fe6-4043-878d-08e89025bbc8" name="vlan20-if" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3047] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3048] audit: op="connection-add" uuid="733f9516-1b97-44d1-8a52-f240114e899e" name="vlan21-if" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3066] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3068] audit: op="connection-add" uuid="57931a83-d7e9-40c7-b5d5-0350f3cd0d8a" name="vlan22-if" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3083] audit: op="connection-delete" uuid="c9d71888-3b72-38d5-8bab-6a45e2651a1e" name="Wired connection 1" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3097] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3100] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3105] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3109] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (283dc479-3dc8-4c77-a52b-aa6ae2f291b4)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3109] audit: op="connection-activate" uuid="283dc479-3dc8-4c77-a52b-aa6ae2f291b4" name="br-ex-br" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3110] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3111] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3115] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3119] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (4e610b41-b7c5-45a1-bd61-f214c02ef3cf)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3120] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3120] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3124] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3128] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (674fe76a-bc52-4f3f-874a-b130701b2895)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3129] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3130] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3134] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3138] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (4ce4554a-760c-4435-988b-77746bb27c13)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3139] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3142] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3146] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3150] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (8aefaa47-7de4-43e6-b444-ee39152781f4)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3151] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3152] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3157] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3161] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (2e90ec5a-db3a-4276-b632-c7580998392e)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3161] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3163] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3165] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3170] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3171] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3174] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3178] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (30be0f96-effb-4c5b-9c38-7df15607b059)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3178] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3181] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3183] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3184] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3186] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3195] device (eth1): disconnecting for new activation request.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3195] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3209] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3211] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3213] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3217] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3218] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3222] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3226] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (5c4dfb5e-4fe6-4043-878d-08e89025bbc8)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3228] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3231] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3234] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3236] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3240] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3241] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3244] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3249] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (733f9516-1b97-44d1-8a52-f240114e899e)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3250] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3252] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3255] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3257] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3260] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <warn>  [1765275930.3262] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3265] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3269] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (57931a83-d7e9-40c7-b5d5-0350f3cd0d8a)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3271] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3273] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3276] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3277] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3280] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3291] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3293] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3296] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3298] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3304] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3308] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3312] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3315] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3318] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3322] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3326] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3330] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3332] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3336] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3341] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3344] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3346] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3351] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 09 10:25:30 compute-0 systemd-udevd[59077]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3356] dhcp4 (eth0): canceled DHCP transaction
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3356] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3356] dhcp4 (eth0): state changed no lease
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3358] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3366] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3370] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59071 uid=0 result="fail" reason="Device is not activated"
Dec 09 10:25:30 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 09 10:25:30 compute-0 kernel: Timeout policy base is empty
Dec 09 10:25:30 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3851] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3857] dhcp4 (eth0): state changed new lease, address=38.102.83.201
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3864] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3875] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.3918] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 09 10:25:30 compute-0 kernel: br-ex: entered promiscuous mode
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4124] device (eth1): Activation: starting connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4130] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4132] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4138] device (eth1): disconnecting for new activation request.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4140] audit: op="connection-activate" uuid="6b6a22e5-bf6a-510d-869a-e83c7a7cb57f" name="ci-private-network" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4141] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4148] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4152] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4156] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4160] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4168] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4170] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4172] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4174] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4179] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4182] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4188] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4192] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4196] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 kernel: vlan22: entered promiscuous mode
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4201] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4205] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 systemd-udevd[59075]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4209] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4233] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4243] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4244] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4251] device (eth1): Activation: starting connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4259] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4263] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4275] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 kernel: vlan20: entered promiscuous mode
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4309] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4312] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4320] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4355] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 09 10:25:30 compute-0 kernel: vlan21: entered promiscuous mode
Dec 09 10:25:30 compute-0 systemd-udevd[59076]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4359] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4360] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4363] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4366] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4370] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4373] device (eth1): Activation: successful, device activated.
Dec 09 10:25:30 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4396] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4405] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4421] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4428] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4431] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4434] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4472] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4476] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4480] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4484] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4495] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4529] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4530] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 09 10:25:30 compute-0 NetworkManager[56302]: <info>  [1765275930.4534] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 09 10:25:31 compute-0 NetworkManager[56302]: <info>  [1765275931.5812] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec 09 10:25:31 compute-0 NetworkManager[56302]: <info>  [1765275931.7383] checkpoint[0x55c704c44950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 09 10:25:31 compute-0 NetworkManager[56302]: <info>  [1765275931.7385] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec 09 10:25:31 compute-0 sudo[59407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wizebylpfgeswtbnypbebkhfuvdsgcfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275931.32398-295-184280983687474/AnsiballZ_async_status.py'
Dec 09 10:25:31 compute-0 sudo[59407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:31 compute-0 python3.9[59409]: ansible-ansible.legacy.async_status Invoked with jid=j839379842523.59065 mode=status _async_dir=/root/.ansible_async
Dec 09 10:25:31 compute-0 sudo[59407]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:31 compute-0 NetworkManager[56302]: <info>  [1765275931.9931] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59071 uid=0 result="success"
Dec 09 10:25:31 compute-0 NetworkManager[56302]: <info>  [1765275931.9941] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59071 uid=0 result="success"
Dec 09 10:25:32 compute-0 NetworkManager[56302]: <info>  [1765275932.1966] audit: op="networking-control" arg="global-dns-configuration" pid=59071 uid=0 result="success"
Dec 09 10:25:32 compute-0 NetworkManager[56302]: <info>  [1765275932.2219] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 09 10:25:32 compute-0 NetworkManager[56302]: <info>  [1765275932.2253] audit: op="networking-control" arg="global-dns-configuration" pid=59071 uid=0 result="success"
Dec 09 10:25:32 compute-0 NetworkManager[56302]: <info>  [1765275932.2280] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59071 uid=0 result="success"
Dec 09 10:25:32 compute-0 NetworkManager[56302]: <info>  [1765275932.3647] checkpoint[0x55c704c44a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 09 10:25:32 compute-0 NetworkManager[56302]: <info>  [1765275932.3650] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59071 uid=0 result="success"
Dec 09 10:25:32 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 09 10:25:32 compute-0 ansible-async_wrapper.py[59069]: Module complete (59069)
Dec 09 10:25:33 compute-0 ansible-async_wrapper.py[59068]: Done in kid B.
Dec 09 10:25:34 compute-0 sshd-session[59419]: Invalid user admin from 159.223.8.217 port 58934
Dec 09 10:25:34 compute-0 sshd-session[59419]: Connection closed by invalid user admin 159.223.8.217 port 58934 [preauth]
Dec 09 10:25:35 compute-0 sudo[59517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhgvcqfcvdsrzzmqaqjoljnczlsctztb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275931.32398-295-184280983687474/AnsiballZ_async_status.py'
Dec 09 10:25:35 compute-0 sudo[59517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:35 compute-0 python3.9[59519]: ansible-ansible.legacy.async_status Invoked with jid=j839379842523.59065 mode=status _async_dir=/root/.ansible_async
Dec 09 10:25:35 compute-0 sudo[59517]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:35 compute-0 sudo[59617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atzcawvpwgolrnqmmqqarzmwoxgvcbcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275931.32398-295-184280983687474/AnsiballZ_async_status.py'
Dec 09 10:25:35 compute-0 sudo[59617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:35 compute-0 python3.9[59619]: ansible-ansible.legacy.async_status Invoked with jid=j839379842523.59065 mode=cleanup _async_dir=/root/.ansible_async
Dec 09 10:25:35 compute-0 sudo[59617]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:36 compute-0 sudo[59769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kctuoxisgwuchynhmkddecwnabapzqiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275936.1900833-322-230654113739781/AnsiballZ_stat.py'
Dec 09 10:25:36 compute-0 sudo[59769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:36 compute-0 python3.9[59771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:25:36 compute-0 sudo[59769]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:37 compute-0 sudo[59892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xditpiwjdaduxpesxoxhwjmhpvlawkvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275936.1900833-322-230654113739781/AnsiballZ_copy.py'
Dec 09 10:25:37 compute-0 sudo[59892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:37 compute-0 python3.9[59894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275936.1900833-322-230654113739781/.source.returncode _original_basename=.vy4ljc1c follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:37 compute-0 sudo[59892]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:37 compute-0 sudo[60044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhpcaenyymnxtgjqazprcrvywfzhxtbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275937.5417066-338-27347400406451/AnsiballZ_stat.py'
Dec 09 10:25:37 compute-0 sudo[60044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:38 compute-0 python3.9[60046]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:25:38 compute-0 sudo[60044]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:38 compute-0 sudo[60167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spxvkzchyysossgpqdhksekplpjsiukb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275937.5417066-338-27347400406451/AnsiballZ_copy.py'
Dec 09 10:25:38 compute-0 sudo[60167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:38 compute-0 python3.9[60169]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275937.5417066-338-27347400406451/.source.cfg _original_basename=.p9m85jgh follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:25:38 compute-0 sudo[60167]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:38 compute-0 sudo[60320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljyyxvarhtzsdjzzdklepndrnqhfskpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275938.7456663-353-217499106879608/AnsiballZ_systemd.py'
Dec 09 10:25:38 compute-0 sudo[60320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:39 compute-0 python3.9[60322]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:25:39 compute-0 systemd[1]: Reloading Network Manager...
Dec 09 10:25:39 compute-0 NetworkManager[56302]: <info>  [1765275939.3571] audit: op="reload" arg="0" pid=60326 uid=0 result="success"
Dec 09 10:25:39 compute-0 NetworkManager[56302]: <info>  [1765275939.3581] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 09 10:25:39 compute-0 systemd[1]: Reloaded Network Manager.
Dec 09 10:25:39 compute-0 sudo[60320]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:39 compute-0 sshd-session[52302]: Connection closed by 192.168.122.30 port 58776
Dec 09 10:25:39 compute-0 sshd-session[52299]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:25:39 compute-0 systemd-logind[806]: Session 12 logged out. Waiting for processes to exit.
Dec 09 10:25:39 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 09 10:25:39 compute-0 systemd[1]: session-12.scope: Consumed 50.907s CPU time.
Dec 09 10:25:39 compute-0 systemd-logind[806]: Removed session 12.
Dec 09 10:25:45 compute-0 sshd-session[60357]: Accepted publickey for zuul from 192.168.122.30 port 35666 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:25:45 compute-0 systemd-logind[806]: New session 13 of user zuul.
Dec 09 10:25:45 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 09 10:25:45 compute-0 sshd-session[60357]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:25:46 compute-0 python3.9[60510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:25:47 compute-0 python3.9[60664]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:25:48 compute-0 python3.9[60853]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:25:48 compute-0 sshd-session[60360]: Connection closed by 192.168.122.30 port 35666
Dec 09 10:25:48 compute-0 sshd-session[60357]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:25:48 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 09 10:25:48 compute-0 systemd[1]: session-13.scope: Consumed 2.364s CPU time.
Dec 09 10:25:48 compute-0 systemd-logind[806]: Session 13 logged out. Waiting for processes to exit.
Dec 09 10:25:48 compute-0 systemd-logind[806]: Removed session 13.
Dec 09 10:25:49 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 09 10:25:54 compute-0 sshd-session[60883]: Accepted publickey for zuul from 192.168.122.30 port 45538 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:25:54 compute-0 systemd-logind[806]: New session 14 of user zuul.
Dec 09 10:25:54 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 09 10:25:54 compute-0 sshd-session[60883]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:25:55 compute-0 python3.9[61036]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:25:56 compute-0 python3.9[61190]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:25:57 compute-0 sudo[61344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujhsvtqbxakfvjohbdxkdqdwzxprmgiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275957.235077-40-159634947484298/AnsiballZ_setup.py'
Dec 09 10:25:57 compute-0 sudo[61344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:58 compute-0 python3.9[61346]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:25:58 compute-0 sudo[61344]: pam_unix(sudo:session): session closed for user root
Dec 09 10:25:58 compute-0 sudo[61429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldefmdxwmvmevbsqltsdtfnlxaaquezr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275957.235077-40-159634947484298/AnsiballZ_dnf.py'
Dec 09 10:25:58 compute-0 sudo[61429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:25:58 compute-0 python3.9[61431]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:26:00 compute-0 sudo[61429]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:00 compute-0 sudo[61582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivhwdgkbbqyvftffjssiwpysopqfrqjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275960.4435196-52-45695767867989/AnsiballZ_setup.py'
Dec 09 10:26:00 compute-0 sudo[61582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:01 compute-0 python3.9[61584]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:26:01 compute-0 sudo[61582]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:01 compute-0 sudo[61774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkwybnixbnrdlchddqkmpyiruvabenuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275961.4583442-63-151888827879792/AnsiballZ_file.py'
Dec 09 10:26:01 compute-0 sudo[61774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:02 compute-0 python3.9[61776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:02 compute-0 sudo[61774]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:02 compute-0 sudo[61926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqbbjpenncqrixlamnsypbuimccjnhsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275962.3319976-71-150986737977462/AnsiballZ_command.py'
Dec 09 10:26:02 compute-0 sudo[61926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:02 compute-0 python3.9[61928]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:26:03 compute-0 sudo[61926]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:03 compute-0 sudo[62088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jirydkkktvezzfnvolcqqcvhqajenhlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275963.160668-79-62865178085432/AnsiballZ_stat.py'
Dec 09 10:26:03 compute-0 sudo[62088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:03 compute-0 python3.9[62090]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:03 compute-0 sudo[62088]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:04 compute-0 sudo[62166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eubhdrqhszcmjpxopuoiwrskbdjrbxxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275963.160668-79-62865178085432/AnsiballZ_file.py'
Dec 09 10:26:04 compute-0 sudo[62166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:04 compute-0 python3.9[62168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:04 compute-0 sudo[62166]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:04 compute-0 sudo[62318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxlyevhjbqsxgchiayrszxwhttrepdkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275964.455848-91-87462276304782/AnsiballZ_stat.py'
Dec 09 10:26:04 compute-0 sudo[62318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:04 compute-0 python3.9[62320]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:04 compute-0 sudo[62318]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:05 compute-0 sudo[62396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjbkznhwfommocxsdslwjshsoznwoshq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275964.455848-91-87462276304782/AnsiballZ_file.py'
Dec 09 10:26:05 compute-0 sudo[62396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:05 compute-0 python3.9[62398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:26:05 compute-0 sudo[62396]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:05 compute-0 sudo[62548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrddaqqftapzyjbtpyjcswjtlfgpwmrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275965.5522475-104-256691558935080/AnsiballZ_ini_file.py'
Dec 09 10:26:05 compute-0 sudo[62548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:06 compute-0 python3.9[62550]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:26:06 compute-0 sudo[62548]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:06 compute-0 sudo[62700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugzeydutgyllgoltxbsksxwqgthtnyte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275966.3054526-104-110723975129506/AnsiballZ_ini_file.py'
Dec 09 10:26:06 compute-0 sudo[62700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:06 compute-0 python3.9[62702]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:26:06 compute-0 sudo[62700]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:07 compute-0 sudo[62852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krsxoedrccloelkdiybvvyggpsfkbvbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275967.0454705-104-65760176554999/AnsiballZ_ini_file.py'
Dec 09 10:26:07 compute-0 sudo[62852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:07 compute-0 python3.9[62854]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:26:07 compute-0 sudo[62852]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:07 compute-0 sudo[63004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmjzxuahpwvgmmcwdxvgcfgwnpkvcjzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275967.633212-104-184020559143987/AnsiballZ_ini_file.py'
Dec 09 10:26:07 compute-0 sudo[63004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:08 compute-0 python3.9[63006]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:26:08 compute-0 sudo[63004]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:08 compute-0 sudo[63156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uluaoyrukffbambnklszofdwkmrhogva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275968.3527167-135-274732139236729/AnsiballZ_dnf.py'
Dec 09 10:26:08 compute-0 sudo[63156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:08 compute-0 python3.9[63158]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:26:10 compute-0 sudo[63156]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:11 compute-0 sudo[63311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-punrcrhkdhkrzemdmslktatgaqoqvmnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275970.8713577-146-118429102951293/AnsiballZ_setup.py'
Dec 09 10:26:11 compute-0 sudo[63311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:11 compute-0 sshd-session[63184]: Invalid user admin from 159.223.8.217 port 39170
Dec 09 10:26:11 compute-0 sshd-session[63184]: Connection closed by invalid user admin 159.223.8.217 port 39170 [preauth]
Dec 09 10:26:11 compute-0 python3.9[63313]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:26:11 compute-0 sudo[63311]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:11 compute-0 sudo[63465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fltvkpwjxwfniylqsggnxyjhgltuutew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275971.7190983-154-65007740077940/AnsiballZ_stat.py'
Dec 09 10:26:11 compute-0 sudo[63465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:12 compute-0 python3.9[63467]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:26:12 compute-0 sudo[63465]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:12 compute-0 sudo[63617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygdrhlfnzzhgiygzdtortasattzbnczx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275972.5482094-163-52688829984737/AnsiballZ_stat.py'
Dec 09 10:26:12 compute-0 sudo[63617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:13 compute-0 python3.9[63619]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:26:13 compute-0 sudo[63617]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:13 compute-0 sudo[63769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlpoffbvpxbzhedytcuzqdlbkaedpkcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275973.2784986-173-230990409405690/AnsiballZ_command.py'
Dec 09 10:26:13 compute-0 sudo[63769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:13 compute-0 python3.9[63771]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:26:13 compute-0 sudo[63769]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:14 compute-0 sudo[63922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfucebmkrdpsgywctvyykmzssryfduhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275973.986161-183-190359584393465/AnsiballZ_service_facts.py'
Dec 09 10:26:14 compute-0 sudo[63922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:14 compute-0 python3.9[63924]: ansible-service_facts Invoked
Dec 09 10:26:14 compute-0 network[63941]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 10:26:14 compute-0 network[63942]: 'network-scripts' will be removed from distribution in near future.
Dec 09 10:26:14 compute-0 network[63943]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 10:26:18 compute-0 sudo[63922]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:19 compute-0 sudo[64226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewzsmlmhiarwflxjzggcnrcnjhwkycxj ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765275978.9402163-198-135819378674786/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765275978.9402163-198-135819378674786/args'
Dec 09 10:26:19 compute-0 sudo[64226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:19 compute-0 sudo[64226]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:19 compute-0 sudo[64393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrijkogrgcmsgktsghntfuzsssxqfjsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275979.641676-209-233596635642922/AnsiballZ_dnf.py'
Dec 09 10:26:19 compute-0 sudo[64393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:20 compute-0 python3.9[64395]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:26:21 compute-0 sudo[64393]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:22 compute-0 sudo[64546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-valgzhcofwyijgrcveqryjosokcrocwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275981.7024856-222-118737401562974/AnsiballZ_package_facts.py'
Dec 09 10:26:22 compute-0 sudo[64546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:22 compute-0 python3.9[64548]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 09 10:26:22 compute-0 sudo[64546]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:23 compute-0 sudo[64698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jssczihdiysucczwitotcqouthwhybjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275983.408193-232-221091931822936/AnsiballZ_stat.py'
Dec 09 10:26:23 compute-0 sudo[64698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:23 compute-0 python3.9[64700]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:23 compute-0 sudo[64698]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:24 compute-0 sudo[64823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxghbjtadpciwhdwctkgwpmnvrvquoan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275983.408193-232-221091931822936/AnsiballZ_copy.py'
Dec 09 10:26:24 compute-0 sudo[64823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:24 compute-0 python3.9[64825]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275983.408193-232-221091931822936/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:24 compute-0 sudo[64823]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:25 compute-0 sudo[64977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjqikaslvxhnxgkbznbgevsxganbnmbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275984.9192743-247-70682147291956/AnsiballZ_stat.py'
Dec 09 10:26:25 compute-0 sudo[64977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:25 compute-0 python3.9[64979]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:25 compute-0 sudo[64977]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:25 compute-0 sudo[65102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glexqifofhpxqmejtzjukgmtjwpllyuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275984.9192743-247-70682147291956/AnsiballZ_copy.py'
Dec 09 10:26:25 compute-0 sudo[65102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:26 compute-0 python3.9[65104]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275984.9192743-247-70682147291956/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:26 compute-0 sudo[65102]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:27 compute-0 sudo[65256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plgqynhmpzytkleoniuwmfgoqkijexkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275986.683866-268-185324634451589/AnsiballZ_lineinfile.py'
Dec 09 10:26:27 compute-0 sudo[65256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:27 compute-0 python3.9[65258]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:27 compute-0 sudo[65256]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:28 compute-0 sudo[65410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nubgumljqurirgukjitalkohbnxjicnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275988.1154292-283-218968951815164/AnsiballZ_setup.py'
Dec 09 10:26:28 compute-0 sudo[65410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:28 compute-0 python3.9[65412]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:26:28 compute-0 sudo[65410]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:29 compute-0 sudo[65494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndcgtmpwzauqlmtggwzqofecxujvunni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275988.1154292-283-218968951815164/AnsiballZ_systemd.py'
Dec 09 10:26:29 compute-0 sudo[65494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:30 compute-0 python3.9[65496]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:26:30 compute-0 sudo[65494]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:31 compute-0 sudo[65648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usdvnayvogujntzdcpfdcevnegekzkuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275991.1164367-299-46972300855610/AnsiballZ_setup.py'
Dec 09 10:26:31 compute-0 sudo[65648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:31 compute-0 python3.9[65650]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:26:31 compute-0 sudo[65648]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:32 compute-0 sudo[65732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adrxuremjmbddnfzenotjqhzhxbuopnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765275991.1164367-299-46972300855610/AnsiballZ_systemd.py'
Dec 09 10:26:32 compute-0 sudo[65732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:33 compute-0 python3.9[65734]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:26:33 compute-0 chronyd[784]: chronyd exiting
Dec 09 10:26:33 compute-0 systemd[1]: Stopping NTP client/server...
Dec 09 10:26:33 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 09 10:26:33 compute-0 systemd[1]: Stopped NTP client/server.
Dec 09 10:26:33 compute-0 systemd[1]: Starting NTP client/server...
Dec 09 10:26:33 compute-0 chronyd[65742]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 09 10:26:33 compute-0 chronyd[65742]: Frequency -26.722 +/- 0.432 ppm read from /var/lib/chrony/drift
Dec 09 10:26:33 compute-0 chronyd[65742]: Loaded seccomp filter (level 2)
Dec 09 10:26:33 compute-0 systemd[1]: Started NTP client/server.
Dec 09 10:26:33 compute-0 sudo[65732]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:33 compute-0 sshd-session[60886]: Connection closed by 192.168.122.30 port 45538
Dec 09 10:26:33 compute-0 sshd-session[60883]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:26:33 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 09 10:26:33 compute-0 systemd[1]: session-14.scope: Consumed 25.324s CPU time.
Dec 09 10:26:33 compute-0 systemd-logind[806]: Session 14 logged out. Waiting for processes to exit.
Dec 09 10:26:33 compute-0 systemd-logind[806]: Removed session 14.
Dec 09 10:26:39 compute-0 sshd-session[65768]: Accepted publickey for zuul from 192.168.122.30 port 48500 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:26:39 compute-0 systemd-logind[806]: New session 15 of user zuul.
Dec 09 10:26:39 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 09 10:26:39 compute-0 sshd-session[65768]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:26:40 compute-0 python3.9[65921]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:26:41 compute-0 sudo[66075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txabkhatbhwdqfkxkvocqqiimlaztdkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276000.6199627-33-6596620205461/AnsiballZ_file.py'
Dec 09 10:26:41 compute-0 sudo[66075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:41 compute-0 python3.9[66077]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:41 compute-0 sudo[66075]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:41 compute-0 sudo[66250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npxrcgppltswyrnnxdnsmbekysdadfwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276001.4445865-41-113364023999345/AnsiballZ_stat.py'
Dec 09 10:26:41 compute-0 sudo[66250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:42 compute-0 python3.9[66252]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:42 compute-0 sudo[66250]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:42 compute-0 sudo[66328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bakpgxayjdqtcqfkdvcrebgeivfiuxgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276001.4445865-41-113364023999345/AnsiballZ_file.py'
Dec 09 10:26:42 compute-0 sudo[66328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:42 compute-0 python3.9[66330]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.umz3syzo recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:42 compute-0 sudo[66328]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:43 compute-0 sudo[66480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrgqiigbclpscvaihhydehwjeryslcqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276002.892835-61-175142638633980/AnsiballZ_stat.py'
Dec 09 10:26:43 compute-0 sudo[66480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:43 compute-0 python3.9[66482]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:43 compute-0 sudo[66480]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:44 compute-0 sudo[66603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjocndrxvocksqgmjsoxmjrvkvpkrqpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276002.892835-61-175142638633980/AnsiballZ_copy.py'
Dec 09 10:26:44 compute-0 sudo[66603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:44 compute-0 python3.9[66605]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276002.892835-61-175142638633980/.source _original_basename=.bl53kjju follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:44 compute-0 sudo[66603]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:45 compute-0 sudo[66755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crjlbxrkhqxtuemuoquigylynzrsskio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276004.7126398-77-161017808062102/AnsiballZ_file.py'
Dec 09 10:26:45 compute-0 sudo[66755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:45 compute-0 python3.9[66757]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:26:45 compute-0 sudo[66755]: pam_unix(sudo:session): session closed for user root
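Here setype=container_file_t labels the directory so containers may bind-mount it. Like chcon, a label applied this way lasts only until the next full relabel; a hedged sketch of both the non-persistent form (what the task effectively does) and a persistent alternative (an assumption, not what the task ran):

    mkdir -p /var/local/libexec
    chcon -R -t container_file_t /var/local/libexec
    # persistent alternative:
    semanage fcontext -a -t container_file_t '/var/local/libexec(/.*)?'
    restorecon -Rv /var/local/libexec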
Dec 09 10:26:45 compute-0 sudo[66909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kavdmsqkbrrmvrjoudvuveizkzqdpbdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276005.5177836-85-4183505911027/AnsiballZ_stat.py'
Dec 09 10:26:45 compute-0 sudo[66909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:45 compute-0 sshd-session[66758]: Received disconnect from 80.94.93.233 port 42620:11:  [preauth]
Dec 09 10:26:45 compute-0 sshd-session[66758]: Disconnected from authenticating user root 80.94.93.233 port 42620 [preauth]
Dec 09 10:26:46 compute-0 python3.9[66911]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:46 compute-0 sudo[66909]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:46 compute-0 sudo[67032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfuzeyjgduflyqtjaxaxdoofvqapsncj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276005.5177836-85-4183505911027/AnsiballZ_copy.py'
Dec 09 10:26:46 compute-0 sudo[67032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:46 compute-0 python3.9[67034]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276005.5177836-85-4183505911027/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:26:46 compute-0 sudo[67032]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:46 compute-0 sshd-session[67035]: Invalid user admin from 159.223.8.217 port 59018
Dec 09 10:26:47 compute-0 sshd-session[67035]: Connection closed by invalid user admin 159.223.8.217 port 59018 [preauth]
Dec 09 10:26:47 compute-0 sudo[67186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfhkjpssornzvnyddmybryohintzozhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276006.8393881-85-195841845680834/AnsiballZ_stat.py'
Dec 09 10:26:47 compute-0 sudo[67186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:47 compute-0 python3.9[67188]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:47 compute-0 sudo[67186]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:47 compute-0 sudo[67309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fawghfxgalgvdrmcakrfcucfpuryngdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276006.8393881-85-195841845680834/AnsiballZ_copy.py'
Dec 09 10:26:47 compute-0 sudo[67309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:47 compute-0 python3.9[67311]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276006.8393881-85-195841845680834/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:26:48 compute-0 sudo[67309]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:48 compute-0 sudo[67461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcywqplmaffnyorhllxetklkidjefela ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276008.1881187-114-119669771140874/AnsiballZ_file.py'
Dec 09 10:26:48 compute-0 sudo[67461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:48 compute-0 python3.9[67463]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:48 compute-0 sudo[67461]: pam_unix(sudo:session): session closed for user root
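The mode=420 logged above is almost certainly an unquoted octal literal in the playbook: YAML parses 0644 as decimal 420, and the module applies those same permission bits, so the result is still 0644. Quoting the mode ('0644') avoids the ambiguity. A quick check of the numeric equivalence (illustrative, not part of the run):

    python3 -c 'print(oct(420))'   # -> 0o644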
Dec 09 10:26:49 compute-0 sudo[67613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmzygctyaqgccsetgslzpuiqpeqnfjmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276009.0655894-122-161973645946222/AnsiballZ_stat.py'
Dec 09 10:26:49 compute-0 sudo[67613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:49 compute-0 python3.9[67615]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:49 compute-0 sudo[67613]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:49 compute-0 sudo[67736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swocrwihavyqqazoddzfucvjdrdnkgho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276009.0655894-122-161973645946222/AnsiballZ_copy.py'
Dec 09 10:26:49 compute-0 sudo[67736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:50 compute-0 python3.9[67738]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276009.0655894-122-161973645946222/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:50 compute-0 sudo[67736]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:50 compute-0 sudo[67888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjnxpdgtdzqrkhuphekvbnaycwzrzjqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276010.248644-137-197946848426022/AnsiballZ_stat.py'
Dec 09 10:26:50 compute-0 sudo[67888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:50 compute-0 python3.9[67890]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:50 compute-0 sudo[67888]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:51 compute-0 sudo[68011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nktbuetinsoepqkpswheognrousedphk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276010.248644-137-197946848426022/AnsiballZ_copy.py'
Dec 09 10:26:51 compute-0 sudo[68011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:51 compute-0 python3.9[68013]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276010.248644-137-197946848426022/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:51 compute-0 sudo[68011]: pam_unix(sudo:session): session closed for user root
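The unit file plus a 91-*.preset pair is the standard systemd preset mechanism: the preset file declares the wanted enablement state, which systemctl preset (or a distro first-boot pass) applies. The preset content is not shown in the log; a plausible single line, assumed from the naming convention:

    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset (assumed content)
    enable edpm-container-shutdown.service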
Dec 09 10:26:52 compute-0 sudo[68163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eldpqijufxuamnzwdehjnaazxxcctqqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276011.461313-152-256859140423862/AnsiballZ_systemd.py'
Dec 09 10:26:52 compute-0 sudo[68163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:52 compute-0 python3.9[68165]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:26:52 compute-0 systemd[1]: Reloading.
Dec 09 10:26:52 compute-0 systemd-rc-local-generator[68193]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:26:52 compute-0 systemd-sysv-generator[68196]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:26:52 compute-0 systemd[1]: Reloading.
Dec 09 10:26:52 compute-0 systemd-rc-local-generator[68226]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:26:52 compute-0 systemd-sysv-generator[68229]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:26:52 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 09 10:26:52 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 09 10:26:52 compute-0 sudo[68163]: pam_unix(sudo:session): session closed for user root
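The systemd task (daemon_reload=True enabled=True state=started) collapses, by hand, to the two commands below; the second "Reloading." entry at 10:26:52 most likely comes from systemctl enable refreshing the manager after rewriting the unit symlinks:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service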
Dec 09 10:26:53 compute-0 sudo[68390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axoxszatsefsfuzywifiqknpollyufce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276013.0977483-160-193175974536045/AnsiballZ_stat.py'
Dec 09 10:26:53 compute-0 sudo[68390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:53 compute-0 python3.9[68392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:53 compute-0 sudo[68390]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:53 compute-0 sudo[68513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvgyrkdtkdxwaojmjmydgcnpholiectn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276013.0977483-160-193175974536045/AnsiballZ_copy.py'
Dec 09 10:26:53 compute-0 sudo[68513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:54 compute-0 python3.9[68515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276013.0977483-160-193175974536045/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:54 compute-0 sudo[68513]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:54 compute-0 sudo[68665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypqqntowdbxprlsmemumobnudgrrbwbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276014.360563-175-144356894532027/AnsiballZ_stat.py'
Dec 09 10:26:54 compute-0 sudo[68665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:54 compute-0 python3.9[68667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:26:54 compute-0 sudo[68665]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:55 compute-0 sudo[68788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igfzzlzlmqfvlfeopgrfdubjbybznrav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276014.360563-175-144356894532027/AnsiballZ_copy.py'
Dec 09 10:26:55 compute-0 sudo[68788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:55 compute-0 python3.9[68790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276014.360563-175-144356894532027/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:26:55 compute-0 sudo[68788]: pam_unix(sudo:session): session closed for user root
Dec 09 10:26:55 compute-0 sudo[68940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odgjoaxetgmtjwrhwqzrlekkpxvgutpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276015.589646-190-192259996180240/AnsiballZ_systemd.py'
Dec 09 10:26:55 compute-0 sudo[68940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:26:56 compute-0 python3.9[68942]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:26:56 compute-0 systemd[1]: Reloading.
Dec 09 10:26:56 compute-0 systemd-sysv-generator[68972]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:26:56 compute-0 systemd-rc-local-generator[68967]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:26:56 compute-0 systemd[1]: Reloading.
Dec 09 10:26:56 compute-0 systemd-rc-local-generator[69004]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:26:56 compute-0 systemd-sysv-generator[69011]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:26:56 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 10:26:56 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 10:26:56 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 10:26:56 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 10:26:56 compute-0 sudo[68940]: pam_unix(sudo:session): session closed for user root
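Judging by its description ("Create netns directory") and the adjacent run-netns-placeholder.mount message, netns-placeholder is a oneshot that pre-creates the /run/netns mount point so container network namespaces can attach there; the unit body itself is not in this excerpt. On a live host it can be inspected with:

    systemctl cat netns-placeholder.service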
Dec 09 10:26:57 compute-0 python3.9[69167]: ansible-ansible.builtin.service_facts Invoked
Dec 09 10:26:57 compute-0 network[69184]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 10:26:57 compute-0 network[69185]: 'network-scripts' will be removed from distribution in near future.
Dec 09 10:26:57 compute-0 network[69186]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 10:27:01 compute-0 sudo[69446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrihubcoozywcnhasvabedxlbvyaupxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276021.0893745-206-122419958752420/AnsiballZ_systemd.py'
Dec 09 10:27:01 compute-0 sudo[69446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:01 compute-0 python3.9[69448]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:27:01 compute-0 systemd[1]: Reloading.
Dec 09 10:27:01 compute-0 systemd-rc-local-generator[69474]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:27:01 compute-0 systemd-sysv-generator[69480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:27:02 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 09 10:27:02 compute-0 iptables.init[69487]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 09 10:27:02 compute-0 iptables.init[69487]: iptables: Flushing firewall rules: [  OK  ]
Dec 09 10:27:02 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 09 10:27:02 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 09 10:27:02 compute-0 sudo[69446]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:02 compute-0 sudo[69681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doracoxptoyqbglysxkjclkdkjqiebxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276022.6195652-206-278156484785585/AnsiballZ_systemd.py'
Dec 09 10:27:02 compute-0 sudo[69681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:03 compute-0 python3.9[69683]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:27:03 compute-0 sudo[69681]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:03 compute-0 sudo[69835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohxjjdsmploarnowojejvrbtdqklueix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276023.552293-222-113026307168572/AnsiballZ_systemd.py'
Dec 09 10:27:03 compute-0 sudo[69835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:04 compute-0 python3.9[69837]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:27:04 compute-0 systemd[1]: Reloading.
Dec 09 10:27:04 compute-0 systemd-rc-local-generator[69868]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:27:04 compute-0 systemd-sysv-generator[69873]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:27:04 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 09 10:27:04 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 09 10:27:04 compute-0 sudo[69835]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:05 compute-0 sudo[70028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grplqnlrtdqqxxejohlpayjbqjrrlhlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276024.7004683-230-2183513046054/AnsiballZ_command.py'
Dec 09 10:27:05 compute-0 sudo[70028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:05 compute-0 python3.9[70030]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:27:05 compute-0 sudo[70028]: pam_unix(sudo:session): session closed for user root
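Taken together, this block migrates the host from the legacy iptables/ip6tables services to nftables and starts from a clean slate. A by-hand equivalent of the three systemd tasks and the flush:

    systemctl disable --now iptables.service ip6tables.service
    systemctl enable --now nftables.service
    nft flush ruleset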
Dec 09 10:27:06 compute-0 sudo[70181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tknkegbyzmtdlalmrpuewjiniztrjdfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276025.7315059-244-196412974802121/AnsiballZ_stat.py'
Dec 09 10:27:06 compute-0 sudo[70181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:06 compute-0 python3.9[70183]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:06 compute-0 sudo[70181]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:06 compute-0 sudo[70306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eowmibnkvgeuddrwwguvmlgewmdgzdxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276025.7315059-244-196412974802121/AnsiballZ_copy.py'
Dec 09 10:27:06 compute-0 sudo[70306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:06 compute-0 python3.9[70308]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276025.7315059-244-196412974802121/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:06 compute-0 sudo[70306]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:07 compute-0 sudo[70459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgtluoizqzhcsyfygbmeeqojtjumfhah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276026.9767346-259-181161576758713/AnsiballZ_systemd.py'
Dec 09 10:27:07 compute-0 sudo[70459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:07 compute-0 python3.9[70461]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:27:07 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 09 10:27:07 compute-0 sshd[1007]: Received SIGHUP; restarting.
Dec 09 10:27:07 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 09 10:27:07 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Dec 09 10:27:07 compute-0 sshd[1007]: Server listening on :: port 22.
Dec 09 10:27:07 compute-0 sudo[70459]: pam_unix(sudo:session): session closed for user root
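Note the validate=/usr/sbin/sshd -T -f %s on the copy: Ansible writes the candidate file to a temporary path, substitutes that path for %s, and installs the file only if the validator exits 0, so a broken sshd_config can never land. The same check and follow-up reload by hand (the candidate path below is illustrative):

    /usr/sbin/sshd -T -f /tmp/sshd_config.candidate   # exit status gates the install
    systemctl reload sshd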
Dec 09 10:27:08 compute-0 sudo[70615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvgtqmmhmgiwekyfprtaqgbwvmttnrkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276027.8679476-267-129590254192145/AnsiballZ_file.py'
Dec 09 10:27:08 compute-0 sudo[70615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:08 compute-0 python3.9[70617]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:08 compute-0 sudo[70615]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:09 compute-0 sudo[70767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zywhzhvrpkfbyeymhzsvewmhszzvbfvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276028.680314-275-277798178836321/AnsiballZ_stat.py'
Dec 09 10:27:09 compute-0 sudo[70767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:09 compute-0 python3.9[70769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:09 compute-0 sudo[70767]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:09 compute-0 sudo[70890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apjkowykjvxndkyjeqdawpkujacqgtxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276028.680314-275-277798178836321/AnsiballZ_copy.py'
Dec 09 10:27:09 compute-0 sudo[70890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:09 compute-0 python3.9[70892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276028.680314-275-277798178836321/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:09 compute-0 sudo[70890]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:10 compute-0 sudo[71042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgehoxwpwhahfiocmgqnyuufnibpxarf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276030.1841118-293-128414987003401/AnsiballZ_timezone.py'
Dec 09 10:27:10 compute-0 sudo[71042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:10 compute-0 python3.9[71044]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 09 10:27:10 compute-0 systemd[1]: Starting Time & Date Service...
Dec 09 10:27:10 compute-0 systemd[1]: Started Time & Date Service.
Dec 09 10:27:11 compute-0 sudo[71042]: pam_unix(sudo:session): session closed for user root
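community.general.timezone drives systemd-timedated on EL9 hosts, which is why systemd starts the Time & Date Service on demand right here. The direct command-line equivalent:

    timedatectl set-timezone UTC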
Dec 09 10:27:11 compute-0 sudo[71198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vixiiqaliusgumefnplktfbjkycbzrdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276031.2535355-302-184482262489963/AnsiballZ_file.py'
Dec 09 10:27:11 compute-0 sudo[71198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:11 compute-0 python3.9[71200]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:11 compute-0 sudo[71198]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:12 compute-0 sudo[71350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxpdavgnewtmwaamccdpfuwghdrjofdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276031.9062324-310-182780049426286/AnsiballZ_stat.py'
Dec 09 10:27:12 compute-0 sudo[71350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:12 compute-0 python3.9[71352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:12 compute-0 sudo[71350]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:12 compute-0 sudo[71473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gufknimppxzfvduahutjpvbnbykucpic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276031.9062324-310-182780049426286/AnsiballZ_copy.py'
Dec 09 10:27:12 compute-0 sudo[71473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:13 compute-0 python3.9[71475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276031.9062324-310-182780049426286/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:13 compute-0 sudo[71473]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:13 compute-0 sudo[71625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkwetncgccrjdtmwcjcixjgvccyudjyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276033.3221977-325-191569428257163/AnsiballZ_stat.py'
Dec 09 10:27:13 compute-0 sudo[71625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:13 compute-0 python3.9[71627]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:13 compute-0 sudo[71625]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:14 compute-0 sudo[71748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwbunvzwlylozqprxhpvtewsoovlnqci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276033.3221977-325-191569428257163/AnsiballZ_copy.py'
Dec 09 10:27:14 compute-0 sudo[71748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:14 compute-0 python3.9[71750]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276033.3221977-325-191569428257163/.source.yaml _original_basename=.v24a2u3p follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:14 compute-0 sudo[71748]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:14 compute-0 sudo[71900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erxzwwrobqxiwcnjmttopztetvyykmie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276034.5696132-340-267879395588508/AnsiballZ_stat.py'
Dec 09 10:27:14 compute-0 sudo[71900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:15 compute-0 python3.9[71902]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:15 compute-0 sudo[71900]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:15 compute-0 sudo[72023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdilvbpsbxmkiwpunyhnxcjnddztetqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276034.5696132-340-267879395588508/AnsiballZ_copy.py'
Dec 09 10:27:15 compute-0 sudo[72023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:15 compute-0 python3.9[72025]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276034.5696132-340-267879395588508/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:15 compute-0 sudo[72023]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:16 compute-0 sudo[72175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtreraahsuianenfvpzmiydmxfepqzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276035.8772004-355-185466728719640/AnsiballZ_command.py'
Dec 09 10:27:16 compute-0 sudo[72175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:16 compute-0 python3.9[72177]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:27:16 compute-0 sudo[72175]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:16 compute-0 sudo[72328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyiiijzkdnkhjfctlcpspqdtumhgsfjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276036.595078-363-252025259247117/AnsiballZ_command.py'
Dec 09 10:27:16 compute-0 sudo[72328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:17 compute-0 python3.9[72330]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:27:17 compute-0 sudo[72328]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:17 compute-0 sudo[72481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuftvdimemxvfklthoupzgkjdlgckcib ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276037.1956692-371-40268794110488/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 10:27:17 compute-0 sudo[72481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:17 compute-0 python3[72483]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 10:27:17 compute-0 sudo[72481]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:18 compute-0 sudo[72633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uywcrsaredgdqfilhsexcqlkavtndblw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276038.1512318-379-132021651288435/AnsiballZ_stat.py'
Dec 09 10:27:18 compute-0 sudo[72633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:18 compute-0 python3.9[72635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:18 compute-0 sudo[72633]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:19 compute-0 sudo[72756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gphcsmnsjmeflcqhklvwwdbflflkglkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276038.1512318-379-132021651288435/AnsiballZ_copy.py'
Dec 09 10:27:19 compute-0 sudo[72756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:19 compute-0 python3.9[72758]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276038.1512318-379-132021651288435/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:19 compute-0 sudo[72756]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:19 compute-0 sudo[72908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmpfdryvwkvfuzodgtbduojalcczueui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276039.3645303-394-224266804531636/AnsiballZ_stat.py'
Dec 09 10:27:19 compute-0 sudo[72908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:19 compute-0 python3.9[72910]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:19 compute-0 sudo[72908]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:20 compute-0 sudo[73031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqhneocyzmhvrqmjuwtxjngrsofcupcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276039.3645303-394-224266804531636/AnsiballZ_copy.py'
Dec 09 10:27:20 compute-0 sudo[73031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:20 compute-0 python3.9[73033]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276039.3645303-394-224266804531636/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:20 compute-0 sudo[73031]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:20 compute-0 sudo[73183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwyeyxywfmfdijnkpajasxqqhbjuinmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276040.6447146-409-57894465429923/AnsiballZ_stat.py'
Dec 09 10:27:20 compute-0 sudo[73183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:21 compute-0 python3.9[73185]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:21 compute-0 sudo[73183]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:21 compute-0 sudo[73308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sthfqlrhnzapvegxaolsvgswusndcnvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276040.6447146-409-57894465429923/AnsiballZ_copy.py'
Dec 09 10:27:21 compute-0 sudo[73308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:21 compute-0 sshd-session[73186]: Invalid user admin from 159.223.8.217 port 46842
Dec 09 10:27:21 compute-0 python3.9[73310]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276040.6447146-409-57894465429923/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:21 compute-0 sudo[73308]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:21 compute-0 sshd-session[73186]: Connection closed by invalid user admin 159.223.8.217 port 46842 [preauth]
Dec 09 10:27:22 compute-0 sudo[73460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnmfrhglgbharouaptsufmdrwwwjsaup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276042.0588565-424-261765440124856/AnsiballZ_stat.py'
Dec 09 10:27:22 compute-0 sudo[73460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:22 compute-0 python3.9[73462]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:22 compute-0 sudo[73460]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:22 compute-0 sudo[73583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbttwaiatfkzrpoqpferpjicvubhkxvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276042.0588565-424-261765440124856/AnsiballZ_copy.py'
Dec 09 10:27:22 compute-0 sudo[73583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:23 compute-0 python3.9[73585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276042.0588565-424-261765440124856/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:23 compute-0 sudo[73583]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:23 compute-0 sudo[73735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arthmbfsmuncnhwhjusqxuieeqktytil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276043.293206-439-259977886276396/AnsiballZ_stat.py'
Dec 09 10:27:23 compute-0 sudo[73735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:23 compute-0 python3.9[73737]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:27:23 compute-0 sudo[73735]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:24 compute-0 sudo[73858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzugwrzhtnqfxkbvsuilyhnjrcwmoali ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276043.293206-439-259977886276396/AnsiballZ_copy.py'
Dec 09 10:27:24 compute-0 sudo[73858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:24 compute-0 python3.9[73860]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276043.293206-439-259977886276396/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:24 compute-0 sudo[73858]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:24 compute-0 sudo[74010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoshodbinrmnmtmhcvratwlvkdtgqxue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276044.5573058-454-39607302738631/AnsiballZ_file.py'
Dec 09 10:27:24 compute-0 sudo[74010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:25 compute-0 python3.9[74012]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:25 compute-0 sudo[74010]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:25 compute-0 sudo[74162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjsqgbakobiyatdzawttqekassgbqzkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276045.2660897-462-188322046849602/AnsiballZ_command.py'
Dec 09 10:27:25 compute-0 sudo[74162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:25 compute-0 python3.9[74164]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:27:25 compute-0 sudo[74162]: pam_unix(sudo:session): session closed for user root
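The concatenation order in this check matters: edpm-chains.nft must define the tables and chains before the flush, rule, and jump files reference them, and nft -c -f - parses the combined stream from stdin without committing anything. Dropping the -c would apply the same stream; whether a later, unshown task does exactly that is an assumption:

    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -f -   # apply variant, without -c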
Dec 09 10:27:26 compute-0 sudo[74321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgmqdmgxfslplzrkkguairwcyyyuoirb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276046.2368257-470-251844836539097/AnsiballZ_blockinfile.py'
Dec 09 10:27:26 compute-0 sudo[74321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:26 compute-0 python3.9[74323]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:27 compute-0 sudo[74321]: pam_unix(sudo:session): session closed for user root
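From the block=, marker=, and path= arguments above, the managed block written into /etc/sysconfig/nftables.conf (the file nftables.service loads at start) reconstructs to:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK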
Dec 09 10:27:27 compute-0 sudo[74474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbsoougllihkivirpcjhgltyjnxpnjxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276047.218697-479-216755070743250/AnsiballZ_file.py'
Dec 09 10:27:27 compute-0 sudo[74474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:27 compute-0 python3.9[74476]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:27 compute-0 sudo[74474]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:28 compute-0 sudo[74626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-istmnuyubhiuyqboasujxftccqzsrhux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276047.864771-479-177169017173522/AnsiballZ_file.py'
Dec 09 10:27:28 compute-0 sudo[74626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:28 compute-0 python3.9[74628]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:28 compute-0 sudo[74626]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:29 compute-0 sudo[74778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghtkzzesbkuwzjcleyczojvytitnxuyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276048.5570424-494-178006567833005/AnsiballZ_mount.py'
Dec 09 10:27:29 compute-0 sudo[74778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:29 compute-0 python3.9[74780]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 09 10:27:29 compute-0 sudo[74778]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:29 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 10:27:29 compute-0 sudo[74932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrmnadlpzjopnnuoumxqphljtaldqhgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276049.4624407-494-81782223854364/AnsiballZ_mount.py'
Dec 09 10:27:29 compute-0 sudo[74932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:29 compute-0 python3.9[74934]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 09 10:27:29 compute-0 sudo[74932]: pam_unix(sudo:session): session closed for user root
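The two ansible.posix.mount tasks with state=mounted both mount the hugetlbfs instances immediately and persist them in /etc/fstab. By hand (the fstab lines show the assumed persisted form):

    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # /etc/fstab entries written by state=mounted (assumed formatting):
    # none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    # none /dev/hugepages2M hugetlbfs pagesize=2M 0 0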
Dec 09 10:27:30 compute-0 sshd-session[65771]: Connection closed by 192.168.122.30 port 48500
Dec 09 10:27:30 compute-0 sshd-session[65768]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:27:30 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 09 10:27:30 compute-0 systemd[1]: session-15.scope: Consumed 37.602s CPU time.
Dec 09 10:27:30 compute-0 systemd-logind[806]: Session 15 logged out. Waiting for processes to exit.
Dec 09 10:27:30 compute-0 systemd-logind[806]: Removed session 15.
Dec 09 10:27:35 compute-0 sshd-session[74960]: Accepted publickey for zuul from 192.168.122.30 port 44690 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:27:35 compute-0 systemd-logind[806]: New session 16 of user zuul.
Dec 09 10:27:35 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 09 10:27:35 compute-0 sshd-session[74960]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:27:36 compute-0 sudo[75113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdyxhejkndbcnavaqfnjphiliqofzyuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276055.5460207-16-264218242306888/AnsiballZ_tempfile.py'
Dec 09 10:27:36 compute-0 sudo[75113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:36 compute-0 python3.9[75115]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 09 10:27:36 compute-0 sudo[75113]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:36 compute-0 sudo[75265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syaorwenkgkefmnkkyfndchfavjexijx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276056.4423935-28-225503443988276/AnsiballZ_stat.py'
Dec 09 10:27:36 compute-0 sudo[75265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:37 compute-0 python3.9[75267]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:27:37 compute-0 sudo[75265]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:37 compute-0 sudo[75417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxclltdxodaarxyomciqcizwjlrpqafl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276057.3360105-38-154840627103177/AnsiballZ_setup.py'
Dec 09 10:27:37 compute-0 sudo[75417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:38 compute-0 python3.9[75419]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:27:38 compute-0 sudo[75417]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:38 compute-0 sudo[75569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffvrdunyneyxmeueohwycecwwpaxeacm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276058.4643276-47-125907278925830/AnsiballZ_blockinfile.py'
Dec 09 10:27:38 compute-0 sudo[75569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:39 compute-0 python3.9[75571]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDv43LwrvO/gTL5Xi96EfG8s5Ayv191kgICPs2cVCBwk4tOW9h/Dv7UMFE4J7XWWq1TFTMQqpThcjvmTS5Xuo7AdEiokw1vLIZsf0vjtk7OvI7Yti49pI/u0vh+G4vx8o7KVujYLEewkVontw/WbNQQN+SwSMPRQ81nPFesWTO2JFSTjqdWZHIbI9rkYDVkKj13u8yq0jMW5rgcs6fxi8w3oGr1u+GGsoUyVflWBxXFdVgsTzVD8MfpdJzlj/RP703OORL/hThWPR4rJbHAnViikRKxtRtaapgWnX6/LxtCN4ABljRaTJTzt7Qq3mPhwzBFUwYhRrZFXAmqbgu4ex2WozgNWaExPfY1OoiqRwUnkf+SzP4huNSGGATK6z7g+GgokoCiygdpulhHWKbbsZWW9fgkg+MPZG1co20bbVqWHpc/RtJh/mxB9vyUFkMT+FbjGdJgqU32U/O1jdaq4BMpGiX3cPceWjDn7WD/K7hPe8VuMOOMpuFzc6gHecvPHIU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILqagmFyoIqRaVbKtHBiXCBRn68yqvKxDUudOdMGI1Vg
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKSHg2oYvBPxog5Vh8F78eEolpPsw5tANvlmr58EvVDdki2zd2UmC3f7nz98GeQTYqAJMp0MOwA9Esm0RnH8p0s=
                                             create=True mode=0644 path=/tmp/ansible.n3ywtrf7 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:39 compute-0 sudo[75569]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:39 compute-0 sudo[75721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvsjvqcmbrulkjfovbbbhdpavuvzltic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276059.2389297-55-120188950158029/AnsiballZ_command.py'
Dec 09 10:27:39 compute-0 sudo[75721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:39 compute-0 python3.9[75723]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.n3ywtrf7' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:27:39 compute-0 sudo[75721]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:40 compute-0 sudo[75875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggshktkxcizdtjzceamhdghsaeralpqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276060.13987-63-227501335768310/AnsiballZ_file.py'
Dec 09 10:27:40 compute-0 sudo[75875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:40 compute-0 python3.9[75877]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.n3ywtrf7 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:40 compute-0 sudo[75875]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:41 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 09 10:27:41 compute-0 sshd-session[74963]: Connection closed by 192.168.122.30 port 44690
Dec 09 10:27:41 compute-0 sshd-session[74960]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:27:41 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 09 10:27:41 compute-0 systemd[1]: session-16.scope: Consumed 3.320s CPU time.
Dec 09 10:27:41 compute-0 systemd-logind[806]: Session 16 logged out. Waiting for processes to exit.
Dec 09 10:27:41 compute-0 systemd-logind[806]: Removed session 16.
Dec 09 10:27:46 compute-0 sshd-session[75905]: Accepted publickey for zuul from 192.168.122.30 port 37974 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:27:46 compute-0 systemd-logind[806]: New session 17 of user zuul.
Dec 09 10:27:46 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 09 10:27:46 compute-0 sshd-session[75905]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:27:47 compute-0 python3.9[76058]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:27:48 compute-0 sudo[76212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fllizdwlictyxfdglouavihinlegvhaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276068.20323-32-68973138726706/AnsiballZ_systemd.py'
Dec 09 10:27:48 compute-0 sudo[76212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:49 compute-0 python3.9[76214]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 09 10:27:49 compute-0 sudo[76212]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:49 compute-0 sudo[76366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwxtvgmagybzggwmxiwjdwzyvnbrwzbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276069.2801223-40-239746894011105/AnsiballZ_systemd.py'
Dec 09 10:27:49 compute-0 sudo[76366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:49 compute-0 python3.9[76368]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:27:49 compute-0 sudo[76366]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:50 compute-0 sudo[76519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-powisaquuqpaklgzvascpicwoeaogstf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276070.271606-49-134697602580195/AnsiballZ_command.py'
Dec 09 10:27:50 compute-0 sudo[76519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:51 compute-0 python3.9[76521]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:27:51 compute-0 sudo[76519]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:51 compute-0 sudo[76672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhvedecupjjgygikubdafhctdyoiuith ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276071.4254763-57-190188413592685/AnsiballZ_stat.py'
Dec 09 10:27:51 compute-0 sudo[76672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:51 compute-0 python3.9[76674]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:27:52 compute-0 sudo[76672]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:52 compute-0 sudo[76826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbejcpmudksmvbqghxpzkbbinavowhea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276072.1747656-65-120746043963575/AnsiballZ_command.py'
Dec 09 10:27:52 compute-0 sudo[76826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:52 compute-0 python3.9[76828]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:27:52 compute-0 sudo[76826]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:53 compute-0 sudo[76981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdsunlxlclkjjqlmouazdvdtgbribrlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276072.7881-73-62942881826879/AnsiballZ_file.py'
Dec 09 10:27:53 compute-0 sudo[76981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:27:53 compute-0 python3.9[76983]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:27:53 compute-0 sudo[76981]: pam_unix(sudo:session): session closed for user root
Dec 09 10:27:53 compute-0 sshd-session[75908]: Connection closed by 192.168.122.30 port 37974
Dec 09 10:27:53 compute-0 sshd-session[75905]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:27:53 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 09 10:27:53 compute-0 systemd[1]: session-17.scope: Consumed 4.303s CPU time.
Dec 09 10:27:53 compute-0 systemd-logind[806]: Session 17 logged out. Waiting for processes to exit.
Dec 09 10:27:53 compute-0 systemd-logind[806]: Removed session 17.
Dec 09 10:27:55 compute-0 sshd-session[77009]: Invalid user admin from 159.223.8.217 port 42908
Dec 09 10:27:55 compute-0 sshd-session[77009]: Connection closed by invalid user admin 159.223.8.217 port 42908 [preauth]
Dec 09 10:27:59 compute-0 sshd-session[77011]: Accepted publickey for zuul from 192.168.122.30 port 56312 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:27:59 compute-0 systemd-logind[806]: New session 18 of user zuul.
Dec 09 10:27:59 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 09 10:27:59 compute-0 sshd-session[77011]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:28:00 compute-0 python3.9[77164]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:28:00 compute-0 sudo[77318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbjxxoqsedyrttixuubixwuwxiwixkxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276080.5581653-34-143561990612077/AnsiballZ_setup.py'
Dec 09 10:28:00 compute-0 sudo[77318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:01 compute-0 python3.9[77320]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:28:01 compute-0 sudo[77318]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:01 compute-0 sudo[77402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibtejnmlojonseclgmodjeddckmrxoya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276080.5581653-34-143561990612077/AnsiballZ_dnf.py'
Dec 09 10:28:01 compute-0 sudo[77402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:02 compute-0 python3.9[77404]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 09 10:28:08 compute-0 sudo[77402]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:09 compute-0 python3.9[77555]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:28:10 compute-0 python3.9[77706]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 10:28:11 compute-0 python3.9[77856]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:28:13 compute-0 python3.9[78006]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:28:13 compute-0 sshd-session[77014]: Connection closed by 192.168.122.30 port 56312
Dec 09 10:28:13 compute-0 sshd-session[77011]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:28:13 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 09 10:28:13 compute-0 systemd[1]: session-18.scope: Consumed 5.953s CPU time.
Dec 09 10:28:13 compute-0 systemd-logind[806]: Session 18 logged out. Waiting for processes to exit.
Dec 09 10:28:13 compute-0 systemd-logind[806]: Removed session 18.
Dec 09 10:28:19 compute-0 sshd-session[78031]: Accepted publickey for zuul from 192.168.122.30 port 50630 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:28:19 compute-0 systemd-logind[806]: New session 19 of user zuul.
Dec 09 10:28:19 compute-0 systemd[1]: Started Session 19 of User zuul.
Dec 09 10:28:19 compute-0 sshd-session[78031]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:28:20 compute-0 python3.9[78184]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:28:21 compute-0 sudo[78338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aywukqtzsabdzygwmrwipfcekmlvcnwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276101.4021595-50-89265460691448/AnsiballZ_file.py'
Dec 09 10:28:21 compute-0 sudo[78338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:22 compute-0 python3.9[78340]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:22 compute-0 sudo[78338]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:22 compute-0 sudo[78490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noehgisuvsmzedtxhowjuistnfzifqst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276102.282713-50-114342811568545/AnsiballZ_file.py'
Dec 09 10:28:22 compute-0 sudo[78490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:22 compute-0 python3.9[78492]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:22 compute-0 sudo[78490]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:23 compute-0 sudo[78642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akfjpkztwafqypyqcbffvtewzcrvizkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276102.9273956-65-129169995818331/AnsiballZ_stat.py'
Dec 09 10:28:23 compute-0 sudo[78642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:23 compute-0 python3.9[78644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:23 compute-0 sudo[78642]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:24 compute-0 sudo[78765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvujqxikscdtbwqjhfsbyrjqmljithmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276102.9273956-65-129169995818331/AnsiballZ_copy.py'
Dec 09 10:28:24 compute-0 sudo[78765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:24 compute-0 python3.9[78767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276102.9273956-65-129169995818331/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=75a7e9e20dc76ad58c9acf2930576cbfbda395a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:24 compute-0 sudo[78765]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:24 compute-0 sudo[78917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymtpocfdgblnjeqcfyvpqoozjazfjdbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276104.3579967-65-258333822793911/AnsiballZ_stat.py'
Dec 09 10:28:24 compute-0 sudo[78917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:24 compute-0 python3.9[78919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:24 compute-0 sudo[78917]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:25 compute-0 sudo[79040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmbftbnzmfbcpipksigbwiwrryxlwvpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276104.3579967-65-258333822793911/AnsiballZ_copy.py'
Dec 09 10:28:25 compute-0 sudo[79040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:25 compute-0 python3.9[79042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276104.3579967-65-258333822793911/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9df84e4bac3f43116987c3ebed189a3674efd35b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:25 compute-0 sudo[79040]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:25 compute-0 sudo[79192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqurjhlrlribeowipxeeazaswidovuev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276105.533749-65-36776499896762/AnsiballZ_stat.py'
Dec 09 10:28:25 compute-0 sudo[79192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:26 compute-0 python3.9[79194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:26 compute-0 sudo[79192]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:26 compute-0 sudo[79315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tklijyqswlniyoqjxshjxalaaszsgkqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276105.533749-65-36776499896762/AnsiballZ_copy.py'
Dec 09 10:28:26 compute-0 sudo[79315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:26 compute-0 python3.9[79317]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276105.533749-65-36776499896762/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=81ecd318380a0c62250c6dfd9bd06fa8e226946d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:26 compute-0 sudo[79315]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:27 compute-0 sudo[79469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjmvpfxvifwchcxgwrcvyzhgjbuqhwlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276106.7717214-109-267249043724650/AnsiballZ_file.py'
Dec 09 10:28:27 compute-0 sudo[79469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:27 compute-0 sshd-session[79341]: Invalid user admin from 159.223.8.217 port 34060
Dec 09 10:28:27 compute-0 sshd-session[79341]: Connection closed by invalid user admin 159.223.8.217 port 34060 [preauth]
Dec 09 10:28:27 compute-0 python3.9[79471]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:27 compute-0 sudo[79469]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:27 compute-0 sudo[79621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aewjmzyhqrxqvextpuzhyosywpoblxib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276107.360508-109-248451118621266/AnsiballZ_file.py'
Dec 09 10:28:27 compute-0 sudo[79621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:27 compute-0 python3.9[79623]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:27 compute-0 sudo[79621]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:28 compute-0 sudo[79773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txnznqhvimqjhryocvjsycdaayusmjom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276107.9996395-124-59153080833839/AnsiballZ_stat.py'
Dec 09 10:28:28 compute-0 sudo[79773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:28 compute-0 python3.9[79775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:28 compute-0 sudo[79773]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:28 compute-0 sudo[79896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uusyjxkkkzpcrpuropdvlyqrlbklpiow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276107.9996395-124-59153080833839/AnsiballZ_copy.py'
Dec 09 10:28:28 compute-0 sudo[79896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:29 compute-0 python3.9[79898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276107.9996395-124-59153080833839/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5fe32fc24b7a620b8aaf126f59be1f3926a68fae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:29 compute-0 sudo[79896]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:29 compute-0 sudo[80048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwhayjarjkidwzztbtolsstgqplontzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276109.2193258-124-265071081167496/AnsiballZ_stat.py'
Dec 09 10:28:29 compute-0 sudo[80048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:29 compute-0 python3.9[80050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:29 compute-0 sudo[80048]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:30 compute-0 sudo[80171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvihtuasyjldvdkulftuuzsxbzpspkak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276109.2193258-124-265071081167496/AnsiballZ_copy.py'
Dec 09 10:28:30 compute-0 sudo[80171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:30 compute-0 python3.9[80173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276109.2193258-124-265071081167496/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9df84e4bac3f43116987c3ebed189a3674efd35b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:30 compute-0 sudo[80171]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:30 compute-0 sudo[80323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibpmwwlxtbrqegwukttlzioirtvlslfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276110.442211-124-56659737235395/AnsiballZ_stat.py'
Dec 09 10:28:30 compute-0 sudo[80323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:30 compute-0 python3.9[80325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:30 compute-0 sudo[80323]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:31 compute-0 sudo[80446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaneaelziezazfethrhaotrairznveno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276110.442211-124-56659737235395/AnsiballZ_copy.py'
Dec 09 10:28:31 compute-0 sudo[80446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:31 compute-0 python3.9[80448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276110.442211-124-56659737235395/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=00418fc570d3edb4f3d9d3c39240c089672a1574 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:31 compute-0 sudo[80446]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:32 compute-0 sudo[80598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzvtqvhxhxocqhwcldnjlyefdfxgqxzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276111.6968484-168-145892824397787/AnsiballZ_file.py'
Dec 09 10:28:32 compute-0 sudo[80598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:32 compute-0 python3.9[80600]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:32 compute-0 sudo[80598]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:32 compute-0 sudo[80750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gczjgqdsdcqvnroduowmrcppplunwsnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276112.3716826-168-210839317080053/AnsiballZ_file.py'
Dec 09 10:28:32 compute-0 sudo[80750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:32 compute-0 python3.9[80752]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:32 compute-0 sudo[80750]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:33 compute-0 sudo[80902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idikucmmjwzgzvcwfazrasdkrmzxlabb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276113.0904078-183-75482094781542/AnsiballZ_stat.py'
Dec 09 10:28:33 compute-0 sudo[80902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:33 compute-0 python3.9[80904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:33 compute-0 sudo[80902]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:34 compute-0 sudo[81025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syljedpbvoywakdtofwnogxyucdnijzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276113.0904078-183-75482094781542/AnsiballZ_copy.py'
Dec 09 10:28:34 compute-0 sudo[81025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:34 compute-0 python3.9[81027]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276113.0904078-183-75482094781542/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4c5e5c8af445d0967df7f1b7f4471a9274abcc23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:34 compute-0 sudo[81025]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:34 compute-0 sudo[81177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqjehvatzgursombpppwqnsjocdodaki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276114.5987546-183-167151409502444/AnsiballZ_stat.py'
Dec 09 10:28:34 compute-0 sudo[81177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:35 compute-0 python3.9[81179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:35 compute-0 sudo[81177]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:35 compute-0 sudo[81300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imqhaxzczsmjljylqctqalhbrbneuiyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276114.5987546-183-167151409502444/AnsiballZ_copy.py'
Dec 09 10:28:35 compute-0 sudo[81300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:35 compute-0 python3.9[81302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276114.5987546-183-167151409502444/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1313c346a0698525bbe53f7c57ab00060cafd46a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:35 compute-0 sudo[81300]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:36 compute-0 sudo[81453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzyuggwmokntjslerhjizmqejpghzuyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276115.7257328-183-81561118347136/AnsiballZ_stat.py'
Dec 09 10:28:36 compute-0 sudo[81453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:36 compute-0 python3.9[81455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:36 compute-0 sudo[81453]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:36 compute-0 sudo[81576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsohprozqpektltnorsoebrnljgmhraw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276115.7257328-183-81561118347136/AnsiballZ_copy.py'
Dec 09 10:28:36 compute-0 sudo[81576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:36 compute-0 python3.9[81578]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276115.7257328-183-81561118347136/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=44e491aa84b4894862306ada50cb8a2277a0e8b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:36 compute-0 sudo[81576]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:37 compute-0 sudo[81728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwbycajfmywkoqgmbcnqsvmubhzpoogm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276116.9823785-227-41592289023664/AnsiballZ_file.py'
Dec 09 10:28:37 compute-0 sudo[81728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:37 compute-0 python3.9[81730]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:37 compute-0 sudo[81728]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:38 compute-0 sudo[81880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sksgxbbboadgmpkbmebvljfioyvvktkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276117.7144914-227-166125142568796/AnsiballZ_file.py'
Dec 09 10:28:38 compute-0 sudo[81880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:38 compute-0 python3.9[81882]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:38 compute-0 sudo[81880]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:38 compute-0 sudo[82032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lozewoeezenhjjurbpcwxalgeouaufnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276118.712649-242-65731930974114/AnsiballZ_stat.py'
Dec 09 10:28:38 compute-0 sudo[82032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:39 compute-0 python3.9[82034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:39 compute-0 sudo[82032]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:39 compute-0 sudo[82155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmgvdhrxabxfchhgzvanpbcyvvtqctun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276118.712649-242-65731930974114/AnsiballZ_copy.py'
Dec 09 10:28:39 compute-0 sudo[82155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:40 compute-0 python3.9[82157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276118.712649-242-65731930974114/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b05fc4daa8b74b9267eb85bc6bd8920570ae2c1d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:40 compute-0 sudo[82155]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:40 compute-0 sudo[82307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wstetqpjcdtcabphaxzvjfbbribxpcof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276120.1661706-242-40923561531610/AnsiballZ_stat.py'
Dec 09 10:28:40 compute-0 sudo[82307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:40 compute-0 python3.9[82309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:40 compute-0 sudo[82307]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:40 compute-0 sudo[82430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkzqmctfgjchlmuyyyqqmsxrwijviijq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276120.1661706-242-40923561531610/AnsiballZ_copy.py'
Dec 09 10:28:41 compute-0 sudo[82430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:41 compute-0 python3.9[82432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276120.1661706-242-40923561531610/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a7839e75e6e9877663be873ed0d35c6c4602de60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:41 compute-0 sudo[82430]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:41 compute-0 sudo[82582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-layzeclhjmgbixrazvwezfdujlqusepg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276121.3730447-242-184885627754381/AnsiballZ_stat.py'
Dec 09 10:28:41 compute-0 sudo[82582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:41 compute-0 python3.9[82584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:41 compute-0 sudo[82582]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:42 compute-0 sudo[82705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlejtoigwmrawzkivhfxlmymkgkepbos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276121.3730447-242-184885627754381/AnsiballZ_copy.py'
Dec 09 10:28:42 compute-0 sudo[82705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:42 compute-0 python3.9[82707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276121.3730447-242-184885627754381/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4b46a13133a454949174e674a943f145d1adf614 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:42 compute-0 sudo[82705]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:42 compute-0 sudo[82857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrmxtnsacsjgstbpqvgkqnvntakcqbus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276122.61645-286-169484308943590/AnsiballZ_file.py'
Dec 09 10:28:42 compute-0 sudo[82857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:43 compute-0 python3.9[82859]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:43 compute-0 chronyd[65742]: Selected source 216.197.156.83 (pool.ntp.org)
Dec 09 10:28:43 compute-0 sudo[82857]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:43 compute-0 sudo[83009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwjkiffjvxjcxdjcgpcqxcnxcaqdtzsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276123.26462-286-181137113696205/AnsiballZ_file.py'
Dec 09 10:28:43 compute-0 sudo[83009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:43 compute-0 python3.9[83011]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:43 compute-0 sudo[83009]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:44 compute-0 sudo[83161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvfqazgwuublqazuwrlskxysuhewppak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276123.8987713-301-71883151834388/AnsiballZ_stat.py'
Dec 09 10:28:44 compute-0 sudo[83161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:44 compute-0 python3.9[83163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:44 compute-0 sudo[83161]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:44 compute-0 sudo[83284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqlciabpvzdujxymokxqwpveohlkumfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276123.8987713-301-71883151834388/AnsiballZ_copy.py'
Dec 09 10:28:44 compute-0 sudo[83284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:44 compute-0 python3.9[83286]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276123.8987713-301-71883151834388/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fca906f19b0b1f3bce9c5ba5c3f336719253898e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:44 compute-0 sudo[83284]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:45 compute-0 sudo[83436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghgghfrigngdeajxwhgexnvjuesixauc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276125.0135424-301-133645921273745/AnsiballZ_stat.py'
Dec 09 10:28:45 compute-0 sudo[83436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:45 compute-0 python3.9[83438]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:45 compute-0 sudo[83436]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:45 compute-0 sudo[83559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfqehkyiozzgvxkkljoflaqpnttolhmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276125.0135424-301-133645921273745/AnsiballZ_copy.py'
Dec 09 10:28:45 compute-0 sudo[83559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:46 compute-0 python3.9[83561]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276125.0135424-301-133645921273745/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1313c346a0698525bbe53f7c57ab00060cafd46a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:46 compute-0 sudo[83559]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:46 compute-0 sudo[83711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubknyrkmlemnqmihxzkyzhtuobvywotu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276126.272177-301-9933633826078/AnsiballZ_stat.py'
Dec 09 10:28:46 compute-0 sudo[83711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:46 compute-0 python3.9[83713]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:46 compute-0 sudo[83711]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:47 compute-0 sudo[83834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfetrwcwrtohxnnlwdmdrkfzbdmgjzqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276126.272177-301-9933633826078/AnsiballZ_copy.py'
Dec 09 10:28:47 compute-0 sudo[83834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:47 compute-0 python3.9[83836]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276126.272177-301-9933633826078/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=91b1916769aa393e01d030ea5ed9be50a21e092f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:47 compute-0 sudo[83834]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:48 compute-0 sudo[83986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqjbhxmwwvlarofwofaagtuyjxypovfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276127.9111464-361-7453556768855/AnsiballZ_file.py'
Dec 09 10:28:48 compute-0 sudo[83986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:48 compute-0 python3.9[83988]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:48 compute-0 sudo[83986]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:48 compute-0 sudo[84138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqbfgmukatikfjulryahocqbxycbwhhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276128.5555575-369-272080541908209/AnsiballZ_stat.py'
Dec 09 10:28:48 compute-0 sudo[84138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:49 compute-0 python3.9[84140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:49 compute-0 sudo[84138]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:49 compute-0 sudo[84261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgtlicgpezhcmlydbtogtfpnrhnsxnpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276128.5555575-369-272080541908209/AnsiballZ_copy.py'
Dec 09 10:28:49 compute-0 sudo[84261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:49 compute-0 python3.9[84263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276128.5555575-369-272080541908209/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:49 compute-0 sudo[84261]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:50 compute-0 sudo[84413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xltdysukgcoewserqpavyoiqvzojidqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276130.1818967-385-146453081806035/AnsiballZ_file.py'
Dec 09 10:28:50 compute-0 sudo[84413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:50 compute-0 python3.9[84415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:50 compute-0 sudo[84413]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:51 compute-0 sudo[84565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tycxakoigevbigyedamghyozymfyleru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276130.909626-393-221996386453576/AnsiballZ_stat.py'
Dec 09 10:28:51 compute-0 sudo[84565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:51 compute-0 python3.9[84567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:51 compute-0 sudo[84565]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:51 compute-0 sudo[84688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqwcdtxtbkjvzxdqocnlwtnbhxdyqws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276130.909626-393-221996386453576/AnsiballZ_copy.py'
Dec 09 10:28:51 compute-0 sudo[84688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:51 compute-0 python3.9[84690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276130.909626-393-221996386453576/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:51 compute-0 sudo[84688]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:52 compute-0 sudo[84840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usvbfqvjttvztsmutiyebfjlaofqpexg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276132.1559181-409-10640568392266/AnsiballZ_file.py'
Dec 09 10:28:52 compute-0 sudo[84840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:52 compute-0 python3.9[84842]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:52 compute-0 sudo[84840]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:53 compute-0 sudo[84992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgktqxmmtegnswnqnatcyqdipjlepmqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276132.925269-417-30369901980855/AnsiballZ_stat.py'
Dec 09 10:28:53 compute-0 sudo[84992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:53 compute-0 python3.9[84994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:53 compute-0 sudo[84992]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:53 compute-0 sudo[85116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzdcldpwkovhqgvmnxjhonshkqrocqpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276132.925269-417-30369901980855/AnsiballZ_copy.py'
Dec 09 10:28:53 compute-0 sudo[85116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:54 compute-0 python3.9[85118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276132.925269-417-30369901980855/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:54 compute-0 sudo[85116]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:54 compute-0 sudo[85268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xenbtgptdlujdltclfasgvmnjipnpqjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276134.417628-433-274898510340717/AnsiballZ_file.py'
Dec 09 10:28:54 compute-0 sudo[85268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:54 compute-0 python3.9[85270]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:54 compute-0 sudo[85268]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:55 compute-0 sudo[85420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hszbpkswqowdmhtkikrkdjkevercdrms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276135.1637125-441-94079578207693/AnsiballZ_stat.py'
Dec 09 10:28:55 compute-0 sudo[85420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:55 compute-0 python3.9[85422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:55 compute-0 sudo[85420]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:56 compute-0 sudo[85543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoziyraubnpjiwaropepsfdvzxxafzsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276135.1637125-441-94079578207693/AnsiballZ_copy.py'
Dec 09 10:28:56 compute-0 sudo[85543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:56 compute-0 python3.9[85545]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276135.1637125-441-94079578207693/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:56 compute-0 sudo[85543]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:57 compute-0 sudo[85695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kakbkncwmltarqenmxrtsqskoxjdfbij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276136.8904843-457-26983475072282/AnsiballZ_file.py'
Dec 09 10:28:57 compute-0 sudo[85695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:57 compute-0 python3.9[85697]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:57 compute-0 sudo[85695]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:57 compute-0 sshd-session[85698]: Invalid user admin from 159.223.8.217 port 59364
Dec 09 10:28:57 compute-0 sshd-session[85698]: Connection closed by invalid user admin 159.223.8.217 port 59364 [preauth]
Dec 09 10:28:57 compute-0 sudo[85849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rplxbezrfzejmbncobgncczaktxeuvqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276137.5972657-465-174793431027287/AnsiballZ_stat.py'
Dec 09 10:28:57 compute-0 sudo[85849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:58 compute-0 python3.9[85851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:28:58 compute-0 sudo[85849]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:58 compute-0 sudo[85972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmrdwfrsrmiwfhhdmejzotsjobjebtid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276137.5972657-465-174793431027287/AnsiballZ_copy.py'
Dec 09 10:28:58 compute-0 sudo[85972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:58 compute-0 python3.9[85974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276137.5972657-465-174793431027287/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:28:58 compute-0 sudo[85972]: pam_unix(sudo:session): session closed for user root
Dec 09 10:28:59 compute-0 sudo[86124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccowjcbbcygdhdklooxtjnqldwyghxkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276138.9318604-481-15817572261357/AnsiballZ_file.py'
Dec 09 10:28:59 compute-0 sudo[86124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:28:59 compute-0 python3.9[86126]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:28:59 compute-0 sudo[86124]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:00 compute-0 sudo[86276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbaxzntcnvkachfvrmpcwrjtejilfkel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276139.8085477-489-202365913457828/AnsiballZ_stat.py'
Dec 09 10:29:00 compute-0 sudo[86276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:00 compute-0 python3.9[86278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:00 compute-0 sudo[86276]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:00 compute-0 sudo[86399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anfbtahlgqrvfspeyjnbavvvuetrtswo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276139.8085477-489-202365913457828/AnsiballZ_copy.py'
Dec 09 10:29:00 compute-0 sudo[86399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:00 compute-0 python3.9[86401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276139.8085477-489-202365913457828/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:00 compute-0 sudo[86399]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:01 compute-0 sudo[86551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwpxzjqbphpaeojvoenmbdxfcsazwosz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276141.1213188-505-99653776418051/AnsiballZ_file.py'
Dec 09 10:29:01 compute-0 sudo[86551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:01 compute-0 python3.9[86553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:29:01 compute-0 sudo[86551]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:02 compute-0 sudo[86703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvisvbpktnmhwzyczdkhdxhbnjevirzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276141.7960346-513-7173669678923/AnsiballZ_stat.py'
Dec 09 10:29:02 compute-0 sudo[86703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:02 compute-0 python3.9[86705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:02 compute-0 sudo[86703]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:02 compute-0 sudo[86826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eciychgipnlvsmmgxcgbxtvrbfwpwfcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276141.7960346-513-7173669678923/AnsiballZ_copy.py'
Dec 09 10:29:02 compute-0 sudo[86826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:02 compute-0 python3.9[86828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276141.7960346-513-7173669678923/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:02 compute-0 sudo[86826]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:03 compute-0 sudo[86978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiktjeckaaisqwvbawmwvsvcrdqhrejv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276143.1165059-529-7199929281229/AnsiballZ_file.py'
Dec 09 10:29:03 compute-0 sudo[86978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:03 compute-0 python3.9[86980]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:29:03 compute-0 sudo[86978]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:04 compute-0 sudo[87130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otvmqywwdfjsucqqsmjtpukfsmafgkac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276143.8510988-537-2921110883999/AnsiballZ_stat.py'
Dec 09 10:29:04 compute-0 sudo[87130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:04 compute-0 python3.9[87132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:04 compute-0 sudo[87130]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:04 compute-0 sudo[87253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxftpyazopxpbjlxjbdeolpidbmtqpsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276143.8510988-537-2921110883999/AnsiballZ_copy.py'
Dec 09 10:29:04 compute-0 sudo[87253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:04 compute-0 python3.9[87255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276143.8510988-537-2921110883999/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:04 compute-0 sudo[87253]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:05 compute-0 sshd-session[78034]: Connection closed by 192.168.122.30 port 50630
Dec 09 10:29:05 compute-0 sshd-session[78031]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:29:05 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 09 10:29:05 compute-0 systemd[1]: session-19.scope: Consumed 36.500s CPU time.
Dec 09 10:29:05 compute-0 systemd-logind[806]: Session 19 logged out. Waiting for processes to exit.
Dec 09 10:29:05 compute-0 systemd-logind[806]: Removed session 19.
Dec 09 10:29:11 compute-0 sshd-session[87280]: Accepted publickey for zuul from 192.168.122.30 port 33840 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:29:11 compute-0 systemd-logind[806]: New session 20 of user zuul.
Dec 09 10:29:11 compute-0 systemd[1]: Started Session 20 of User zuul.
Dec 09 10:29:11 compute-0 sshd-session[87280]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:29:12 compute-0 python3.9[87433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:29:13 compute-0 sudo[87587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izmstahzaowktntztztaloelbohniwtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276153.3884382-34-82916402229298/AnsiballZ_file.py'
Dec 09 10:29:13 compute-0 sudo[87587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:13 compute-0 python3.9[87589]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:29:14 compute-0 sudo[87587]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:14 compute-0 sudo[87739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-petnsdcmglandrmqfyghoruwvcmurfoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276154.1511257-34-151054225652067/AnsiballZ_file.py'
Dec 09 10:29:14 compute-0 sudo[87739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:14 compute-0 python3.9[87741]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:29:14 compute-0 sudo[87739]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:15 compute-0 python3.9[87891]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:29:15 compute-0 sudo[88041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwyrjvqrlwxpyvwinxqnbmjslngzindj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276155.5358872-57-75187969934719/AnsiballZ_seboolean.py'
Dec 09 10:29:15 compute-0 sudo[88041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:16 compute-0 python3.9[88043]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 09 10:29:19 compute-0 sudo[88041]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:20 compute-0 sudo[88197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmcgqcgfiwbqteprdkdthxcrdjddtqxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276159.8492665-67-99031851070554/AnsiballZ_setup.py'
Dec 09 10:29:20 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 09 10:29:20 compute-0 sudo[88197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:20 compute-0 python3.9[88199]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:29:20 compute-0 sudo[88197]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:21 compute-0 sudo[88281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqpeywpqgoyqqgtjkyjddujqcwxcixkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276159.8492665-67-99031851070554/AnsiballZ_dnf.py'
Dec 09 10:29:21 compute-0 sudo[88281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:21 compute-0 python3.9[88283]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:29:22 compute-0 sudo[88281]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:23 compute-0 sudo[88434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqbpbyilwnihfrhzjxlleeinpklislto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276162.8384469-79-253115596505048/AnsiballZ_systemd.py'
Dec 09 10:29:23 compute-0 sudo[88434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:23 compute-0 python3.9[88436]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 10:29:23 compute-0 sudo[88434]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:24 compute-0 sudo[88589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kerfyacnnjcrmfngnfmtdllcedtxuagy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276164.005482-87-51977572863846/AnsiballZ_edpm_nftables_snippet.py'
Dec 09 10:29:24 compute-0 sudo[88589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:24 compute-0 python3[88591]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 09 10:29:24 compute-0 sudo[88589]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:25 compute-0 sudo[88741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhfkpslekdfuhvnyddrqmomerboeqqez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276164.8769476-96-109478149224878/AnsiballZ_file.py'
Dec 09 10:29:25 compute-0 sudo[88741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:25 compute-0 python3.9[88743]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:25 compute-0 sudo[88741]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:25 compute-0 sudo[88893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxrysitjjsrqkvzftaanitettkpbqclo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276165.475479-104-49877413013744/AnsiballZ_stat.py'
Dec 09 10:29:25 compute-0 sudo[88893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:26 compute-0 python3.9[88895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:26 compute-0 sudo[88893]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:26 compute-0 sudo[88971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvabovupgsuuwujvtrzesdnqejkzxvns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276165.475479-104-49877413013744/AnsiballZ_file.py'
Dec 09 10:29:26 compute-0 sudo[88971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:26 compute-0 python3.9[88973]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:26 compute-0 sudo[88971]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:26 compute-0 sudo[89123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voaldpcwfqkgmqljrckpwwbbusmxegxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276166.7200956-116-246397069650752/AnsiballZ_stat.py'
Dec 09 10:29:26 compute-0 sudo[89123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:27 compute-0 python3.9[89125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:27 compute-0 sudo[89123]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:27 compute-0 sshd-session[89126]: Invalid user admin from 159.223.8.217 port 53528
Dec 09 10:29:27 compute-0 sudo[89203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jagvkbqvapjokzslfxvlasbcvdsjzrzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276166.7200956-116-246397069650752/AnsiballZ_file.py'
Dec 09 10:29:27 compute-0 sudo[89203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:27 compute-0 sshd-session[89126]: Connection closed by invalid user admin 159.223.8.217 port 53528 [preauth]
Dec 09 10:29:27 compute-0 python3.9[89205]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.aiq2tro9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:27 compute-0 sudo[89203]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:28 compute-0 sudo[89355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwqgmfsptjkjcchlfqfdihnoouamgkow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276167.7662134-128-226912797621322/AnsiballZ_stat.py'
Dec 09 10:29:28 compute-0 sudo[89355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:28 compute-0 python3.9[89357]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:28 compute-0 sudo[89355]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:28 compute-0 sudo[89433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsqgazdfnyyqizjcedezakheksbzdihn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276167.7662134-128-226912797621322/AnsiballZ_file.py'
Dec 09 10:29:28 compute-0 sudo[89433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:28 compute-0 python3.9[89435]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:28 compute-0 sudo[89433]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:29 compute-0 sudo[89585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwmeibmoejgxibjjpelyyvpqqjkuqmte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276168.8691056-141-173813816670504/AnsiballZ_command.py'
Dec 09 10:29:29 compute-0 sudo[89585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:29 compute-0 python3.9[89587]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:29:29 compute-0 sudo[89585]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:30 compute-0 sudo[89738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-badutkqdevwuqcoqwenmljrgepdeqrhp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276169.6577492-149-196753350993790/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 10:29:30 compute-0 sudo[89738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:30 compute-0 python3[89740]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 10:29:30 compute-0 sudo[89738]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:30 compute-0 sudo[89890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbwufioatvwflxsdhwymqahbnhxvijuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276170.5889516-157-81726110283577/AnsiballZ_stat.py'
Dec 09 10:29:30 compute-0 sudo[89890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:31 compute-0 python3.9[89892]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:31 compute-0 sudo[89890]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:31 compute-0 sudo[90015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekftohhvyoclnvvdzdtgyyaghsyidihq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276170.5889516-157-81726110283577/AnsiballZ_copy.py'
Dec 09 10:29:31 compute-0 sudo[90015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:31 compute-0 python3.9[90017]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276170.5889516-157-81726110283577/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:31 compute-0 sudo[90015]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:32 compute-0 sudo[90167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mktanugoylgnlhyaunbrijoskifmcoss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276172.1185-172-221601782413379/AnsiballZ_stat.py'
Dec 09 10:29:32 compute-0 sudo[90167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:32 compute-0 python3.9[90169]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:32 compute-0 sudo[90167]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:33 compute-0 sudo[90292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buycfzpapfeofdaizygvgocuywmlhsuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276172.1185-172-221601782413379/AnsiballZ_copy.py'
Dec 09 10:29:33 compute-0 sudo[90292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:33 compute-0 python3.9[90294]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276172.1185-172-221601782413379/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:33 compute-0 sudo[90292]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:34 compute-0 sudo[90444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eliawiedejuqhrjsexrewfkjyiegvstw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276173.555542-187-160583411595189/AnsiballZ_stat.py'
Dec 09 10:29:34 compute-0 sudo[90444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:34 compute-0 python3.9[90446]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:34 compute-0 sudo[90444]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:34 compute-0 sudo[90569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmyouaogseerlnnftlpvapkytrpnevhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276173.555542-187-160583411595189/AnsiballZ_copy.py'
Dec 09 10:29:34 compute-0 sudo[90569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:35 compute-0 python3.9[90571]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276173.555542-187-160583411595189/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:35 compute-0 sudo[90569]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:35 compute-0 sudo[90721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqdltwgmfuvcujvkiaklznkxlxkyiamb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276175.2173684-202-135455074933278/AnsiballZ_stat.py'
Dec 09 10:29:35 compute-0 sudo[90721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:35 compute-0 python3.9[90723]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:35 compute-0 sudo[90721]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:36 compute-0 sudo[90846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxcxyhdntthumurmdumguygwdmruyncr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276175.2173684-202-135455074933278/AnsiballZ_copy.py'
Dec 09 10:29:36 compute-0 sudo[90846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:36 compute-0 python3.9[90848]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276175.2173684-202-135455074933278/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:36 compute-0 sudo[90846]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:36 compute-0 sudo[90998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-colpgqffvluyqgihqatohyetzrzxasxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276176.4554794-217-210027049466732/AnsiballZ_stat.py'
Dec 09 10:29:36 compute-0 sudo[90998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:37 compute-0 python3.9[91000]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:37 compute-0 sudo[90998]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:37 compute-0 sudo[91123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkzsshzvgapqpsdroopkkbpchltbrewb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276176.4554794-217-210027049466732/AnsiballZ_copy.py'
Dec 09 10:29:37 compute-0 sudo[91123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:37 compute-0 python3.9[91125]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276176.4554794-217-210027049466732/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:37 compute-0 sudo[91123]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:38 compute-0 sudo[91275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjxvjfgwxrtwagmmnwqftodlvasppkep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276177.7544768-232-140379687683825/AnsiballZ_file.py'
Dec 09 10:29:38 compute-0 sudo[91275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:38 compute-0 python3.9[91277]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:38 compute-0 sudo[91275]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:38 compute-0 sudo[91427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxsfypujaatjlgcamtojvonkkndsidzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276178.362582-240-110347899178826/AnsiballZ_command.py'
Dec 09 10:29:38 compute-0 sudo[91427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:38 compute-0 python3.9[91429]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:29:38 compute-0 sudo[91427]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:39 compute-0 sudo[91582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skzfvulfqfufdrwylapxfpbxwssfloza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276179.1195588-248-278840401937506/AnsiballZ_blockinfile.py'
Dec 09 10:29:39 compute-0 sudo[91582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:39 compute-0 python3.9[91584]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:39 compute-0 sudo[91582]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:40 compute-0 sudo[91734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcobbidfjrkqhsvrnozdzebzvvjxlhjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276179.9686103-257-66280504634852/AnsiballZ_command.py'
Dec 09 10:29:40 compute-0 sudo[91734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:40 compute-0 python3.9[91736]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:29:40 compute-0 sudo[91734]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:40 compute-0 sudo[91887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqancctcwhjouxcobjjseyjommumqtpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276180.616641-265-272742791181028/AnsiballZ_stat.py'
Dec 09 10:29:40 compute-0 sudo[91887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:41 compute-0 python3.9[91889]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:29:41 compute-0 sudo[91887]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:41 compute-0 sudo[92041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xveezgiyhbbmxaimxtbcytejortxgldl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276181.2512703-273-129503316065793/AnsiballZ_command.py'
Dec 09 10:29:41 compute-0 sudo[92041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:41 compute-0 python3.9[92043]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:29:41 compute-0 sudo[92041]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:42 compute-0 sudo[92196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tggidtljypxtiwysmfxnpbbvxeaqbkhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276181.898357-281-145539330906463/AnsiballZ_file.py'
Dec 09 10:29:42 compute-0 sudo[92196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:42 compute-0 python3.9[92198]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:42 compute-0 sudo[92196]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:43 compute-0 python3.9[92348]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:29:44 compute-0 sudo[92499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqvcyijagpmiifyhdazbxrdpfwrsfcfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276184.1284962-321-98626999740033/AnsiballZ_command.py'
Dec 09 10:29:44 compute-0 sudo[92499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:44 compute-0 python3.9[92501]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:29:44 compute-0 ovs-vsctl[92502]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 09 10:29:44 compute-0 sudo[92499]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:45 compute-0 sudo[92652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyiuqhqycffogplmfupoclqsqlfpihkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276184.7976067-330-189933836254706/AnsiballZ_command.py'
Dec 09 10:29:45 compute-0 sudo[92652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:45 compute-0 python3.9[92654]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:29:45 compute-0 sudo[92652]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:45 compute-0 sudo[92807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umjgkfjmhefehadaqivqugepwtcxhjnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276185.4402053-338-256550318235734/AnsiballZ_command.py'
Dec 09 10:29:45 compute-0 sudo[92807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:45 compute-0 python3.9[92809]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:29:45 compute-0 ovs-vsctl[92810]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 09 10:29:45 compute-0 sudo[92807]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:46 compute-0 python3.9[92960]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:29:47 compute-0 sudo[93112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lduxignetvbzrjrlhhuigffximjqrklv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276186.741241-355-219461342363070/AnsiballZ_file.py'
Dec 09 10:29:47 compute-0 sudo[93112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:47 compute-0 python3.9[93114]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:29:47 compute-0 sudo[93112]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:47 compute-0 sudo[93264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztninqntwieemwhpymfogownnchhwlgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276187.4171302-363-218449472949205/AnsiballZ_stat.py'
Dec 09 10:29:47 compute-0 sudo[93264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:47 compute-0 python3.9[93266]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:47 compute-0 sudo[93264]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:48 compute-0 sudo[93342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfcvwaohafopjqaheupjizjwzaonxplt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276187.4171302-363-218449472949205/AnsiballZ_file.py'
Dec 09 10:29:48 compute-0 sudo[93342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:48 compute-0 python3.9[93344]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:29:48 compute-0 sudo[93342]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:48 compute-0 sudo[93494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzupaasarvmripevpltmkitjrdfkpjmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276188.447406-363-90792623709693/AnsiballZ_stat.py'
Dec 09 10:29:48 compute-0 sudo[93494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:48 compute-0 python3.9[93496]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:48 compute-0 sudo[93494]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:49 compute-0 sudo[93572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckbzgdharobvnbsmqmciyrflyzmwqslq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276188.447406-363-90792623709693/AnsiballZ_file.py'
Dec 09 10:29:49 compute-0 sudo[93572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:49 compute-0 python3.9[93574]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:29:49 compute-0 sudo[93572]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:49 compute-0 sudo[93724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdfvvlekicdoaaqmosdlfbrpxykvaoud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276189.4607606-386-25636664268090/AnsiballZ_file.py'
Dec 09 10:29:49 compute-0 sudo[93724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:49 compute-0 python3.9[93726]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:49 compute-0 sudo[93724]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:50 compute-0 sudo[93876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viuotqswzkljtumoijqmtmrmftdmslnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276190.087432-394-141197055181978/AnsiballZ_stat.py'
Dec 09 10:29:50 compute-0 sudo[93876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:50 compute-0 python3.9[93878]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:50 compute-0 sudo[93876]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:50 compute-0 sudo[93954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzhyqwgcrmzmciwrfjgbgvvvkvcvisxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276190.087432-394-141197055181978/AnsiballZ_file.py'
Dec 09 10:29:50 compute-0 sudo[93954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:50 compute-0 python3.9[93956]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:50 compute-0 sudo[93954]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:51 compute-0 sudo[94106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aysktmleqhllvbjwceigetfcpkafhgdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276191.1443117-406-42788332220828/AnsiballZ_stat.py'
Dec 09 10:29:51 compute-0 sudo[94106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:54 compute-0 python3.9[94108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:54 compute-0 sudo[94106]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:54 compute-0 sudo[94184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfnhuurudwaxuepzdcyvbcgbenwpzmsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276191.1443117-406-42788332220828/AnsiballZ_file.py'
Dec 09 10:29:54 compute-0 sudo[94184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:54 compute-0 python3.9[94186]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:54 compute-0 sudo[94184]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:54 compute-0 sudo[94336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sulejtzwktjsixfxjrkjqnimvbykxode ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276194.6884139-418-56308132624467/AnsiballZ_systemd.py'
Dec 09 10:29:54 compute-0 sudo[94336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:55 compute-0 python3.9[94338]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:29:55 compute-0 systemd[1]: Reloading.
Dec 09 10:29:55 compute-0 systemd-sysv-generator[94370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:29:55 compute-0 systemd-rc-local-generator[94365]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:29:55 compute-0 sudo[94336]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:55 compute-0 sudo[94526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikxdblsnagehisxtpmhgmsbazuoeeevc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276195.7519727-426-233067321540665/AnsiballZ_stat.py'
Dec 09 10:29:55 compute-0 sudo[94526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:56 compute-0 python3.9[94528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:56 compute-0 sudo[94526]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:56 compute-0 sudo[94604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hndscbeebrxmqanudidyjugosrmlqiir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276195.7519727-426-233067321540665/AnsiballZ_file.py'
Dec 09 10:29:56 compute-0 sudo[94604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:56 compute-0 python3.9[94606]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:56 compute-0 sudo[94604]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:56 compute-0 sudo[94756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdqecknvzihvnneycgaqiquxgaldnfod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276196.7566848-438-3369458594509/AnsiballZ_stat.py'
Dec 09 10:29:57 compute-0 sudo[94756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:57 compute-0 python3.9[94758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:29:57 compute-0 sudo[94756]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:57 compute-0 sudo[94836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrfjerixivpbnzrvsvynzvmknvvmtfvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276196.7566848-438-3369458594509/AnsiballZ_file.py'
Dec 09 10:29:57 compute-0 sudo[94836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:57 compute-0 python3.9[94838]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:29:57 compute-0 sudo[94836]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:57 compute-0 sshd-session[94761]: Invalid user admin from 159.223.8.217 port 48518
Dec 09 10:29:57 compute-0 sshd-session[94761]: Connection closed by invalid user admin 159.223.8.217 port 48518 [preauth]
Dec 09 10:29:58 compute-0 sudo[94988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwlkqfzkcfapdfflhppzrwszgkflgdcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276197.7536488-450-241476205691877/AnsiballZ_systemd.py'
Dec 09 10:29:58 compute-0 sudo[94988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:58 compute-0 python3.9[94990]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:29:58 compute-0 systemd[1]: Reloading.
Dec 09 10:29:58 compute-0 systemd-rc-local-generator[95015]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:29:58 compute-0 systemd-sysv-generator[95020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:29:58 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 10:29:58 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 10:29:58 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 10:29:58 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 10:29:58 compute-0 sudo[94988]: pam_unix(sudo:session): session closed for user root
Dec 09 10:29:59 compute-0 sudo[95182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agzvccoyhhvgsmqugnouxiyljtlhvycs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276198.9301023-460-258414408407763/AnsiballZ_file.py'
Dec 09 10:29:59 compute-0 sudo[95182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:29:59 compute-0 python3.9[95184]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:29:59 compute-0 sudo[95182]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:00 compute-0 sudo[95334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuerwguommziqvfrvjafaxefksrhncjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276199.9331644-468-258861547626813/AnsiballZ_stat.py'
Dec 09 10:30:00 compute-0 sudo[95334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:00 compute-0 python3.9[95336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:00 compute-0 sudo[95334]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:00 compute-0 sudo[95457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzzvgkeettbxudmbbrgqdlahcbnlydcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276199.9331644-468-258861547626813/AnsiballZ_copy.py'
Dec 09 10:30:00 compute-0 sudo[95457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:01 compute-0 python3.9[95459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276199.9331644-468-258861547626813/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:01 compute-0 sudo[95457]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:01 compute-0 sudo[95611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnxskghsmurgllxkzoqkrwfgtnwfqqbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276201.4656093-485-109060554034434/AnsiballZ_file.py'
Dec 09 10:30:01 compute-0 sudo[95611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:02 compute-0 python3.9[95613]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:02 compute-0 sudo[95611]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:02 compute-0 sudo[95763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uigrhbvhmcgbzatuerdnwvmrmpthtcdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276202.1966534-493-41773020000150/AnsiballZ_stat.py'
Dec 09 10:30:02 compute-0 sudo[95763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:02 compute-0 python3.9[95765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:02 compute-0 sudo[95763]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:02 compute-0 sshd-session[95490]: Received disconnect from 193.46.255.217 port 25550:11:  [preauth]
Dec 09 10:30:02 compute-0 sshd-session[95490]: Disconnected from authenticating user root 193.46.255.217 port 25550 [preauth]
Dec 09 10:30:02 compute-0 sudo[95886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvdngiurqvplvsniznhyuaykhvldjukp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276202.1966534-493-41773020000150/AnsiballZ_copy.py'
Dec 09 10:30:02 compute-0 sudo[95886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:03 compute-0 python3.9[95888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276202.1966534-493-41773020000150/.source.json _original_basename=.l1fj4oga follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:30:03 compute-0 sudo[95886]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:03 compute-0 sudo[96038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpmkvlgpedtlvanxjdjokfpyjqzwqrco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276203.4009845-508-48250855460467/AnsiballZ_file.py'
Dec 09 10:30:03 compute-0 sudo[96038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:03 compute-0 python3.9[96040]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:30:03 compute-0 sudo[96038]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:04 compute-0 sudo[96190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxfpiylwmiboooosbbabyytgmqhutcjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276204.092825-516-56020788976309/AnsiballZ_stat.py'
Dec 09 10:30:04 compute-0 sudo[96190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:04 compute-0 sudo[96190]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:04 compute-0 sudo[96313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqgpqgvigaoetejdbrbhquiwqilcklbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276204.092825-516-56020788976309/AnsiballZ_copy.py'
Dec 09 10:30:04 compute-0 sudo[96313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:05 compute-0 sudo[96313]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:06 compute-0 sudo[96465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivxelujiivbghbpknzxoxyciueorlper ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276205.687467-533-24946992172466/AnsiballZ_container_config_data.py'
Dec 09 10:30:06 compute-0 sudo[96465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:06 compute-0 python3.9[96467]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 09 10:30:06 compute-0 sudo[96465]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:07 compute-0 sudo[96617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grafluvrjxpxwioltsfhvsxgppksgcwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276206.6041815-542-107057226866166/AnsiballZ_container_config_hash.py'
Dec 09 10:30:07 compute-0 sudo[96617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:07 compute-0 python3.9[96619]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:30:07 compute-0 sudo[96617]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:07 compute-0 sudo[96769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfregpejaujoswdrmrtoeombfpwnmzoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276207.480331-551-58301817529332/AnsiballZ_podman_container_info.py'
Dec 09 10:30:07 compute-0 sudo[96769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:08 compute-0 python3.9[96771]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 09 10:30:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:30:08 compute-0 sudo[96769]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:09 compute-0 sudo[96932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rokqtoeqcigzdqlaoovwbgqyiscrzcvt ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276208.568773-564-170712511407329/AnsiballZ_edpm_container_manage.py'
Dec 09 10:30:09 compute-0 sudo[96932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:09 compute-0 python3[96934]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:30:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:30:09 compute-0 podman[96970]: 2025-12-09 10:30:09.510480093 +0000 UTC m=+0.019761720 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 09 10:30:10 compute-0 podman[96970]: 2025-12-09 10:30:10.999655491 +0000 UTC m=+1.508937128 container create e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 09 10:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:30:11 compute-0 python3[96934]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 09 10:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:30:11 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 09 10:30:11 compute-0 sudo[96932]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:11 compute-0 sudo[97154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wecqcfjzlkymdpehwybbkoepwpnhoazq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276211.3066828-572-131375226737574/AnsiballZ_stat.py'
Dec 09 10:30:11 compute-0 sudo[97154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:11 compute-0 python3.9[97156]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:30:11 compute-0 sudo[97154]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:12 compute-0 sudo[97308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsfglirqeuphmhpwvsasxhgvbdctwrmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276212.0855637-581-144509033149179/AnsiballZ_file.py'
Dec 09 10:30:12 compute-0 sudo[97308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:12 compute-0 python3.9[97310]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:30:12 compute-0 sudo[97308]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:12 compute-0 sudo[97384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iodorbcqukjiydjuvkguxkfjibrvxkix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276212.0855637-581-144509033149179/AnsiballZ_stat.py'
Dec 09 10:30:12 compute-0 sudo[97384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:12 compute-0 python3.9[97386]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:30:12 compute-0 sudo[97384]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:13 compute-0 sudo[97535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozyicvqlsmybhgvfbbfvswfrtmtehldf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276213.0607538-581-8441588140945/AnsiballZ_copy.py'
Dec 09 10:30:13 compute-0 sudo[97535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:13 compute-0 python3.9[97537]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276213.0607538-581-8441588140945/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:30:13 compute-0 sudo[97535]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:14 compute-0 sudo[97611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqxqsuvxufocnuaepgycoprrmhpafbuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276213.0607538-581-8441588140945/AnsiballZ_systemd.py'
Dec 09 10:30:14 compute-0 sudo[97611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:14 compute-0 python3.9[97613]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:30:14 compute-0 systemd[1]: Reloading.
Dec 09 10:30:14 compute-0 systemd-rc-local-generator[97641]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:30:14 compute-0 systemd-sysv-generator[97645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:30:14 compute-0 sudo[97611]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:14 compute-0 sudo[97723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkxeuimmksfannvsmufdkyubmcoykfio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276213.0607538-581-8441588140945/AnsiballZ_systemd.py'
Dec 09 10:30:14 compute-0 sudo[97723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:15 compute-0 python3.9[97725]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:30:15 compute-0 systemd[1]: Reloading.
Dec 09 10:30:15 compute-0 systemd-rc-local-generator[97753]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:30:15 compute-0 systemd-sysv-generator[97757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:30:15 compute-0 systemd[1]: Starting ovn_controller container...
Dec 09 10:30:16 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 09 10:30:16 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:30:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657d7a39cf9cd41507dcd7760d5ebf320949ccaec507605954f660be41deb58c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 09 10:30:16 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.
Dec 09 10:30:17 compute-0 podman[97765]: 2025-12-09 10:30:17.163882455 +0000 UTC m=+1.443466823 container init e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller)
Dec 09 10:30:17 compute-0 ovn_controller[97780]: + sudo -E kolla_set_configs
Dec 09 10:30:17 compute-0 podman[97765]: 2025-12-09 10:30:17.193902751 +0000 UTC m=+1.473487089 container start e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:30:17 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 09 10:30:17 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 09 10:30:17 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 09 10:30:17 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 09 10:30:17 compute-0 systemd[97798]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 09 10:30:17 compute-0 systemd[97798]: Queued start job for default target Main User Target.
Dec 09 10:30:17 compute-0 systemd[97798]: Created slice User Application Slice.
Dec 09 10:30:17 compute-0 systemd[97798]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 09 10:30:17 compute-0 systemd[97798]: Started Daily Cleanup of User's Temporary Directories.
Dec 09 10:30:17 compute-0 systemd[97798]: Reached target Paths.
Dec 09 10:30:17 compute-0 systemd[97798]: Reached target Timers.
Dec 09 10:30:17 compute-0 systemd[97798]: Starting D-Bus User Message Bus Socket...
Dec 09 10:30:17 compute-0 systemd[97798]: Starting Create User's Volatile Files and Directories...
Dec 09 10:30:17 compute-0 systemd[97798]: Listening on D-Bus User Message Bus Socket.
Dec 09 10:30:17 compute-0 systemd[97798]: Reached target Sockets.
Dec 09 10:30:17 compute-0 systemd[97798]: Finished Create User's Volatile Files and Directories.
Dec 09 10:30:17 compute-0 systemd[97798]: Reached target Basic System.
Dec 09 10:30:17 compute-0 systemd[97798]: Reached target Main User Target.
Dec 09 10:30:17 compute-0 systemd[97798]: Startup finished in 151ms.
Dec 09 10:30:17 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 09 10:30:17 compute-0 systemd[1]: Started Session c1 of User root.
Dec 09 10:30:17 compute-0 ovn_controller[97780]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:30:17 compute-0 ovn_controller[97780]: INFO:__main__:Validating config file
Dec 09 10:30:17 compute-0 ovn_controller[97780]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:30:17 compute-0 ovn_controller[97780]: INFO:__main__:Writing out command to execute
Dec 09 10:30:17 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 09 10:30:17 compute-0 ovn_controller[97780]: ++ cat /run_command
Dec 09 10:30:17 compute-0 ovn_controller[97780]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 09 10:30:17 compute-0 ovn_controller[97780]: + ARGS=
Dec 09 10:30:17 compute-0 ovn_controller[97780]: + sudo kolla_copy_cacerts
Dec 09 10:30:17 compute-0 systemd[1]: Started Session c2 of User root.
Dec 09 10:30:17 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 09 10:30:17 compute-0 ovn_controller[97780]: + [[ ! -n '' ]]
Dec 09 10:30:17 compute-0 ovn_controller[97780]: + . kolla_extend_start
Dec 09 10:30:17 compute-0 ovn_controller[97780]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 09 10:30:17 compute-0 ovn_controller[97780]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 09 10:30:17 compute-0 ovn_controller[97780]: + umask 0022
Dec 09 10:30:17 compute-0 ovn_controller[97780]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 09 10:30:17 compute-0 NetworkManager[56302]: <info>  [1765276217.7326] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 09 10:30:17 compute-0 NetworkManager[56302]: <info>  [1765276217.7335] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:30:17 compute-0 NetworkManager[56302]: <warn>  [1765276217.7339] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 10:30:17 compute-0 NetworkManager[56302]: <info>  [1765276217.7349] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Dec 09 10:30:17 compute-0 NetworkManager[56302]: <info>  [1765276217.7358] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Dec 09 10:30:17 compute-0 NetworkManager[56302]: <info>  [1765276217.7363] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 09 10:30:17 compute-0 kernel: br-int: entered promiscuous mode
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 09 10:30:17 compute-0 ovn_controller[97780]: 2025-12-09T10:30:17Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 09 10:30:17 compute-0 NetworkManager[56302]: <info>  [1765276217.7721] manager: (ovn-54258f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 09 10:30:17 compute-0 systemd-udevd[97826]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:30:17 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 09 10:30:17 compute-0 systemd-udevd[97829]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:30:17 compute-0 NetworkManager[56302]: <info>  [1765276217.8063] device (genev_sys_6081): carrier: link connected
Dec 09 10:30:17 compute-0 NetworkManager[56302]: <info>  [1765276217.8069] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Dec 09 10:30:17 compute-0 edpm-start-podman-container[97765]: ovn_controller
Dec 09 10:30:17 compute-0 edpm-start-podman-container[97764]: Creating additional drop-in dependency for "ovn_controller" (e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6)
Dec 09 10:30:17 compute-0 systemd[1]: Reloading.
Dec 09 10:30:18 compute-0 podman[97786]: 2025-12-09 10:30:18.050211848 +0000 UTC m=+0.842407068 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 09 10:30:18 compute-0 systemd-rc-local-generator[97895]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:30:18 compute-0 systemd-sysv-generator[97898]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:30:18 compute-0 systemd[1]: Started ovn_controller container.
Dec 09 10:30:18 compute-0 sudo[97723]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:18 compute-0 sudo[98052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmphahnpinjybvjbspyvclfywnghugkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276218.4647663-609-241713313107049/AnsiballZ_command.py'
Dec 09 10:30:18 compute-0 sudo[98052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:19 compute-0 python3.9[98054]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:30:19 compute-0 ovs-vsctl[98055]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 09 10:30:19 compute-0 sudo[98052]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:19 compute-0 sudo[98205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yznlzdegyrkssjyhdbencpzvikdqovce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276219.3454173-617-234334581318193/AnsiballZ_command.py'
Dec 09 10:30:19 compute-0 sudo[98205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:19 compute-0 python3.9[98207]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:30:20 compute-0 ovs-vsctl[98209]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 09 10:30:20 compute-0 sudo[98205]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:20 compute-0 sudo[98360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcectrfdssduyyfbhsrjkllzzpfskwgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276220.4050052-631-116778400125066/AnsiballZ_command.py'
Dec 09 10:30:20 compute-0 sudo[98360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:20 compute-0 python3.9[98362]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:30:20 compute-0 ovs-vsctl[98363]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 09 10:30:20 compute-0 sudo[98360]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:21 compute-0 sshd-session[87283]: Connection closed by 192.168.122.30 port 33840
Dec 09 10:30:21 compute-0 sshd-session[87280]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:30:21 compute-0 systemd[1]: session-20.scope: Deactivated successfully.
Dec 09 10:30:21 compute-0 systemd[1]: session-20.scope: Consumed 45.753s CPU time.
Dec 09 10:30:21 compute-0 systemd-logind[806]: Session 20 logged out. Waiting for processes to exit.
Dec 09 10:30:21 compute-0 systemd-logind[806]: Removed session 20.
Dec 09 10:30:26 compute-0 sshd-session[98389]: Accepted publickey for zuul from 192.168.122.30 port 37986 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:30:26 compute-0 systemd-logind[806]: New session 22 of user zuul.
Dec 09 10:30:26 compute-0 systemd[1]: Started Session 22 of User zuul.
Dec 09 10:30:26 compute-0 sshd-session[98389]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:30:27 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 09 10:30:27 compute-0 systemd[97798]: Activating special unit Exit the Session...
Dec 09 10:30:27 compute-0 systemd[97798]: Stopped target Main User Target.
Dec 09 10:30:27 compute-0 systemd[97798]: Stopped target Basic System.
Dec 09 10:30:27 compute-0 systemd[97798]: Stopped target Paths.
Dec 09 10:30:27 compute-0 systemd[97798]: Stopped target Sockets.
Dec 09 10:30:27 compute-0 systemd[97798]: Stopped target Timers.
Dec 09 10:30:27 compute-0 systemd[97798]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 09 10:30:27 compute-0 systemd[97798]: Closed D-Bus User Message Bus Socket.
Dec 09 10:30:27 compute-0 systemd[97798]: Stopped Create User's Volatile Files and Directories.
Dec 09 10:30:27 compute-0 systemd[97798]: Removed slice User Application Slice.
Dec 09 10:30:27 compute-0 systemd[97798]: Reached target Shutdown.
Dec 09 10:30:27 compute-0 systemd[97798]: Finished Exit the Session.
Dec 09 10:30:27 compute-0 systemd[97798]: Reached target Exit the Session.
Dec 09 10:30:27 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 09 10:30:27 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 09 10:30:27 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 09 10:30:27 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 09 10:30:27 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 09 10:30:27 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 09 10:30:27 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 09 10:30:28 compute-0 python3.9[98544]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:30:28 compute-0 sshd-session[98545]: Invalid user admin from 159.223.8.217 port 37370
Dec 09 10:30:28 compute-0 sshd-session[98545]: Connection closed by invalid user admin 159.223.8.217 port 37370 [preauth]
Dec 09 10:30:29 compute-0 sudo[98700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzjaqcxyvvjgoqfzxkibxolikuupmaew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276228.6731246-34-120474204013913/AnsiballZ_file.py'
Dec 09 10:30:29 compute-0 sudo[98700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:29 compute-0 python3.9[98702]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:29 compute-0 sudo[98700]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:29 compute-0 sudo[98852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azoftaofbkqupkpxorhcjnoqnbihsqjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276229.613045-34-182691171674916/AnsiballZ_file.py'
Dec 09 10:30:29 compute-0 sudo[98852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:30 compute-0 python3.9[98854]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:30 compute-0 sudo[98852]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:30 compute-0 sudo[99004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujibcduxdfwvjvvztfqbublfrvtqpogp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276230.2408917-34-260328675994449/AnsiballZ_file.py'
Dec 09 10:30:30 compute-0 sudo[99004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:30 compute-0 python3.9[99006]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:30 compute-0 sudo[99004]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:31 compute-0 sudo[99156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csuexqplhhbajkwpktirdpeutjwgrtgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276230.85633-34-95951673858719/AnsiballZ_file.py'
Dec 09 10:30:31 compute-0 sudo[99156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:31 compute-0 python3.9[99158]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:31 compute-0 sudo[99156]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:31 compute-0 sudo[99308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdstknyddtvigpfuygglijptrgqhxfoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276231.5581007-34-38125384753298/AnsiballZ_file.py'
Dec 09 10:30:31 compute-0 sudo[99308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:32 compute-0 python3.9[99310]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:32 compute-0 sudo[99308]: pam_unix(sudo:session): session closed for user root
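The run of ansible.builtin.file invocations above creates the directories the OVN metadata agent container will later mount, all labelled container_file_t and owned by zuul. A minimal playbook sketch reconstructed from the logged parameters (the task name and loop structure are my own, and the first directory was logged without an explicit mode):

    - name: Create neutron config and state directories (reconstruction)
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: zuul
        group: zuul
        mode: "0755"
        setype: container_file_t
      loop:
        - /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent
        - /var/lib/neutron
        - /var/lib/neutron/kill_scripts
        - /var/lib/neutron/ovn-metadata-proxy
        - /var/lib/neutron/external/pids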
Dec 09 10:30:32 compute-0 python3.9[99460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:30:33 compute-0 sudo[99610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyelxwhwfvwjvlrjjlvvauxukalgwkwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276232.951762-78-77977872192431/AnsiballZ_seboolean.py'
Dec 09 10:30:33 compute-0 sudo[99610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:33 compute-0 python3.9[99612]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 09 10:30:34 compute-0 sudo[99610]: pam_unix(sudo:session): session closed for user root
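The seboolean call above is the only SELinux policy change in this pass. Expressed as a task, using exactly the parameters that were logged:

    - name: Allow sandboxed containers to use netlink sockets
      ansible.posix.seboolean:
        name: virt_sandbox_use_netlink
        state: true
        persistent: true

On the host this is equivalent to running setsebool -P virt_sandbox_use_netlink on.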
Dec 09 10:30:35 compute-0 python3.9[99762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:35 compute-0 python3.9[99883]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276234.4913602-86-229279897171716/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:36 compute-0 python3.9[100034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:37 compute-0 python3.9[100155]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276236.0862122-101-252001484919297/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
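Each stat/copy pair above is the usual two-step Ansible file deployment: a remote ansible.legacy.stat to compare checksums, followed by a copy only when the content differs. The _original_basename values (haproxy.j2, kill-script.j2) indicate the sources were rendered from Jinja2 templates on the control node, so the originating tasks were probably shaped like this sketch (template names taken from the log, file contents unknown):

    - name: Install haproxy wrapper for OVN metadata proxies (sketch)
      ansible.builtin.template:
        src: haproxy.j2
        dest: /var/lib/neutron/ovn_metadata_haproxy_wrapper
        mode: "0755"
        setype: container_file_t

    - name: Install haproxy kill script (sketch)
      ansible.builtin.template:
        src: kill-script.j2
        dest: /var/lib/neutron/kill_scripts/haproxy-kill
        mode: "0755"
        setype: container_file_t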
Dec 09 10:30:37 compute-0 sudo[100305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjyogyfevdidambqvfnycypvtknaqxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276237.5216875-118-239891529311538/AnsiballZ_setup.py'
Dec 09 10:30:37 compute-0 sudo[100305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:38 compute-0 python3.9[100307]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:30:38 compute-0 sudo[100305]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:38 compute-0 sudo[100389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghhpuyfndnxyrjpyoqmpqhhhhclipnpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276237.5216875-118-239891529311538/AnsiballZ_dnf.py'
Dec 09 10:30:38 compute-0 sudo[100389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:39 compute-0 python3.9[100391]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:30:40 compute-0 sudo[100389]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:41 compute-0 sudo[100542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isvlqfoyxjxwsnvmrecjaeohjltztrwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276240.5732284-130-85476901864060/AnsiballZ_systemd.py'
Dec 09 10:30:41 compute-0 sudo[100542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:41 compute-0 python3.9[100544]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 10:30:41 compute-0 sudo[100542]: pam_unix(sudo:session): session closed for user root
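Before any agent container is started, the play makes sure Open vSwitch itself is installed and running on the host; the dnf and systemd invocations between 10:30:39 and 10:30:41 reduce to two tasks (parameters taken from the log, task names assumed):

    - name: Install openvswitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present

    - name: Enable and start openvswitch.service
      ansible.builtin.systemd:
        name: openvswitch.service
        state: started
        enabled: true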
Dec 09 10:30:42 compute-0 python3.9[100697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:43 compute-0 python3.9[100818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276241.821226-138-230799949340148/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:43 compute-0 python3.9[100968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:44 compute-0 python3.9[101089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276243.4243717-138-260805423351174/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:45 compute-0 python3.9[101239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:46 compute-0 python3.9[101360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276245.1735392-182-275146435253262/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:46 compute-0 python3.9[101510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:47 compute-0 python3.9[101631]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276246.4171643-182-33605177935004/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:48 compute-0 python3.9[101781]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:30:48 compute-0 sudo[101941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lskobpmlxeghosbkeedioipymtmlgnoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276248.3386147-220-42878038646344/AnsiballZ_file.py'
Dec 09 10:30:48 compute-0 sudo[101941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:48 compute-0 ovn_controller[97780]: 2025-12-09T10:30:48Z|00025|memory|INFO|16128 kB peak resident set size after 31.0 seconds
Dec 09 10:30:48 compute-0 ovn_controller[97780]: 2025-12-09T10:30:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 09 10:30:48 compute-0 podman[101907]: 2025-12-09 10:30:48.70987903 +0000 UTC m=+0.120667817 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 09 10:30:48 compute-0 python3.9[101946]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:48 compute-0 sudo[101941]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:49 compute-0 sudo[102108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajywlghjcpqbgbrlunszqvmtjmqvnwjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276249.0098524-228-5403795021786/AnsiballZ_stat.py'
Dec 09 10:30:49 compute-0 sudo[102108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:49 compute-0 python3.9[102110]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:49 compute-0 sudo[102108]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:49 compute-0 sudo[102186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odcmgywwxpactyfedrdwtlajjzwjpstw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276249.0098524-228-5403795021786/AnsiballZ_file.py'
Dec 09 10:30:49 compute-0 sudo[102186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:49 compute-0 python3.9[102188]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:49 compute-0 sudo[102186]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:50 compute-0 sudo[102338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vatrgnukfeaxdoaycklccgvzrwhlhvvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276250.1449494-228-122448984966780/AnsiballZ_stat.py'
Dec 09 10:30:50 compute-0 sudo[102338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:50 compute-0 python3.9[102340]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:50 compute-0 sudo[102338]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:50 compute-0 sudo[102416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vikfkbuarrizjlghumirmmubkmedsntf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276250.1449494-228-122448984966780/AnsiballZ_file.py'
Dec 09 10:30:50 compute-0 sudo[102416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:51 compute-0 python3.9[102418]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:30:51 compute-0 sudo[102416]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:51 compute-0 sudo[102568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwndllgnikopvelptjxqdzxsyvoybrzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276251.3377666-251-142073044199028/AnsiballZ_file.py'
Dec 09 10:30:51 compute-0 sudo[102568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:51 compute-0 python3.9[102570]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:30:51 compute-0 sudo[102568]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:52 compute-0 sudo[102720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iemtgtcfylhecaguvuxnzdpgazensbco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276252.022522-259-167908299119378/AnsiballZ_stat.py'
Dec 09 10:30:52 compute-0 sudo[102720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:52 compute-0 python3.9[102722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:52 compute-0 sudo[102720]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:52 compute-0 sudo[102798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqfrwpqvrmglsafhttaiaiuhryoeexiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276252.022522-259-167908299119378/AnsiballZ_file.py'
Dec 09 10:30:52 compute-0 sudo[102798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:52 compute-0 python3.9[102800]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:30:52 compute-0 sudo[102798]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:53 compute-0 sudo[102950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxqokiqbcziiwhwinhusgxtlggtfdytd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276253.1508608-271-65638054192398/AnsiballZ_stat.py'
Dec 09 10:30:53 compute-0 sudo[102950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:53 compute-0 python3.9[102952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:53 compute-0 sudo[102950]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:53 compute-0 sudo[103028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wejcyqwifjlsionpizcyewyxmgqwcdsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276253.1508608-271-65638054192398/AnsiballZ_file.py'
Dec 09 10:30:53 compute-0 sudo[103028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:54 compute-0 python3.9[103030]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:30:54 compute-0 sudo[103028]: pam_unix(sudo:session): session closed for user root
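The files placed under /etc/systemd/system-preset are standard systemd preset policy files. Their contents are not logged, but judging by the file name, 91-edpm-container-shutdown.preset most likely contains nothing more than an enable line for the matching unit (assumed content, systemd.preset(5) syntax):

    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset (assumed)
    enable edpm-container-shutdown.service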
Dec 09 10:30:54 compute-0 sudo[103180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mizscdkslgjtlytkcnklwbqclllywerh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276254.295491-283-30248863497900/AnsiballZ_systemd.py'
Dec 09 10:30:54 compute-0 sudo[103180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:54 compute-0 python3.9[103182]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:30:54 compute-0 systemd[1]: Reloading.
Dec 09 10:30:54 compute-0 systemd-rc-local-generator[103207]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:30:54 compute-0 systemd-sysv-generator[103212]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:30:55 compute-0 sudo[103180]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:56 compute-0 sudo[103369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koqlgivjcfmmokrkgutwbhetsyhyeaax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276255.811046-291-199345598362471/AnsiballZ_stat.py'
Dec 09 10:30:56 compute-0 sudo[103369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:56 compute-0 python3.9[103371]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:56 compute-0 sudo[103369]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:56 compute-0 sudo[103447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnfoyyzuebrmuvceqvitusroljuzeala ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276255.811046-291-199345598362471/AnsiballZ_file.py'
Dec 09 10:30:56 compute-0 sudo[103447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:56 compute-0 python3.9[103449]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:30:56 compute-0 sudo[103447]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:57 compute-0 sudo[103599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djltlryxzhjcnkkxrxwfcfsfzjwuwavz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276257.039806-303-143373073737858/AnsiballZ_stat.py'
Dec 09 10:30:57 compute-0 sudo[103599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:57 compute-0 python3.9[103601]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:30:57 compute-0 sudo[103599]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:57 compute-0 sudo[103677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoxqknsborugjrjdngjflwfppcepdzfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276257.039806-303-143373073737858/AnsiballZ_file.py'
Dec 09 10:30:57 compute-0 sudo[103677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:58 compute-0 python3.9[103679]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:30:58 compute-0 sudo[103677]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:58 compute-0 sudo[103831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnlneflgbgbtetaonodhcxeiksxfkmos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276258.1809251-315-261740453339578/AnsiballZ_systemd.py'
Dec 09 10:30:58 compute-0 sudo[103831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:30:58 compute-0 sshd-session[103727]: Invalid user admin from 159.223.8.217 port 58742
Dec 09 10:30:58 compute-0 sshd-session[103727]: Connection closed by invalid user admin 159.223.8.217 port 58742 [preauth]
Dec 09 10:30:58 compute-0 python3.9[103833]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:30:58 compute-0 systemd[1]: Reloading.
Dec 09 10:30:58 compute-0 systemd-sysv-generator[103867]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:30:58 compute-0 systemd-rc-local-generator[103862]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:30:59 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 10:30:59 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 10:30:59 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 10:30:59 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 10:30:59 compute-0 sudo[103831]: pam_unix(sudo:session): session closed for user root
Dec 09 10:30:59 compute-0 sudo[104025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzjopnuoukxxtuwcufojsosgrfuctjlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276259.474404-325-146587477791354/AnsiballZ_file.py'
Dec 09 10:30:59 compute-0 sudo[104025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:00 compute-0 python3.9[104027]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:31:00 compute-0 sudo[104025]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:00 compute-0 sudo[104177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuzynfmgizvxdjllqcngcjzyjxnelwdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276260.2765355-333-100835462900902/AnsiballZ_stat.py'
Dec 09 10:31:00 compute-0 sudo[104177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:00 compute-0 python3.9[104179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:31:00 compute-0 sudo[104177]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:01 compute-0 sudo[104300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlsyxoruoyvyxtohmkmjleoopzotatgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276260.2765355-333-100835462900902/AnsiballZ_copy.py'
Dec 09 10:31:01 compute-0 sudo[104300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:01 compute-0 python3.9[104302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276260.2765355-333-100835462900902/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:31:01 compute-0 sudo[104300]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:02 compute-0 sudo[104452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzibwmogyhtntpzavddtnrlrqxxhcjlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276261.6674225-350-76607533513704/AnsiballZ_file.py'
Dec 09 10:31:02 compute-0 sudo[104452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:02 compute-0 python3.9[104454]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:31:02 compute-0 sudo[104452]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:02 compute-0 sudo[104604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpqglpesvgumbciuhbdxftnkecwzwxli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276262.4765635-358-114433721574861/AnsiballZ_stat.py'
Dec 09 10:31:02 compute-0 sudo[104604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:03 compute-0 python3.9[104606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:31:03 compute-0 sudo[104604]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:03 compute-0 sudo[104727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzddnbrbqpzvejvhgmpythwmmpeomoax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276262.4765635-358-114433721574861/AnsiballZ_copy.py'
Dec 09 10:31:03 compute-0 sudo[104727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:03 compute-0 python3.9[104729]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276262.4765635-358-114433721574861/.source.json _original_basename=.550l_lr4 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:03 compute-0 sudo[104727]: pam_unix(sudo:session): session closed for user root
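The JSON just written to /var/lib/kolla/config_files/ovn_metadata_agent.json is bind-mounted into the container as /var/lib/kolla/config_files/config.json and consumed by kolla_set_configs at startup (see the "Loading config file" lines at 10:31:15). Its exact contents are not logged, but Kolla config files follow a fixed shape; a hypothetical example consistent with the copy that kolla_set_configs later reports (the command line, owner, and permissions are assumptions):

    {
      "command": "/usr/bin/neutron-ovn-metadata-agent --config-dir /etc/neutron.conf.d",
      "config_files": [
        {
          "source": "/etc/neutron.conf.d/01-rootwrap.conf",
          "dest": "/etc/neutron/rootwrap.conf",
          "owner": "neutron",
          "perm": "0600"
        }
      ]
    }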
Dec 09 10:31:04 compute-0 sudo[104879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svtiegqxyplfntosprkjoliogxndmgqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276263.7964127-373-234853570543128/AnsiballZ_file.py'
Dec 09 10:31:04 compute-0 sudo[104879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:04 compute-0 python3.9[104881]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:04 compute-0 sudo[104879]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:05 compute-0 sudo[105031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqkxxxftefdiqhzvukrrklvcyfvxwiqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276264.6678274-381-154215751780188/AnsiballZ_stat.py'
Dec 09 10:31:05 compute-0 sudo[105031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:05 compute-0 sudo[105031]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:05 compute-0 sudo[105154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsmthhtriwefuontzqpfjurmitgelsxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276264.6678274-381-154215751780188/AnsiballZ_copy.py'
Dec 09 10:31:05 compute-0 sudo[105154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:05 compute-0 sudo[105154]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:06 compute-0 sudo[105306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogupwmwkcgkkqwbqmdsmbzhxgsqexntw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276266.1341686-398-199123160503322/AnsiballZ_container_config_data.py'
Dec 09 10:31:06 compute-0 sudo[105306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:06 compute-0 python3.9[105308]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 09 10:31:06 compute-0 sudo[105306]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:07 compute-0 sudo[105458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqhwnseqkebxsforsdfvtzzepfhsdtiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276267.0006227-407-215111677290160/AnsiballZ_container_config_hash.py'
Dec 09 10:31:07 compute-0 sudo[105458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:07 compute-0 python3.9[105460]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:31:07 compute-0 sudo[105458]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:08 compute-0 sudo[105610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gaqcjcrssbmmqqjwymcwznwdeynfdoet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276267.9005847-416-79181600950691/AnsiballZ_podman_container_info.py'
Dec 09 10:31:08 compute-0 sudo[105610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:08 compute-0 python3.9[105612]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 09 10:31:08 compute-0 sudo[105610]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:09 compute-0 sudo[105788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uymkowifdqvphobpekcgqafjbxkjinjc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276269.3796432-429-27387141530660/AnsiballZ_edpm_container_manage.py'
Dec 09 10:31:09 compute-0 sudo[105788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:10 compute-0 python3[105790]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:31:10 compute-0 podman[105826]: 2025-12-09 10:31:10.320590891 +0000 UTC m=+0.057880659 container create 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 10:31:10 compute-0 podman[105826]: 2025-12-09 10:31:10.288094754 +0000 UTC m=+0.025384502 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 09 10:31:10 compute-0 python3[105790]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 09 10:31:10 compute-0 sudo[105788]: pam_unix(sudo:session): session closed for user root
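edpm_container_manage is the module from the EDPM Ansible collection that turns the JSON definitions under config_dir into podman containers; the PODMAN-CONTAINER-DEBUG line above shows the exact podman create command it generated for ovn_metadata_agent. The task behind this call, reconstructed from the logged parameters (the osp.edpm collection prefix is an assumption):

    - name: Manage ovn_metadata_agent container (reconstruction)
      osp.edpm.edpm_container_manage:
        config_id: ovn_metadata_agent
        config_dir: /var/lib/edpm-config/container-startup-config/ovn_metadata_agent
        config_patterns: "*.json"
        config_overrides: {}
        concurrency: 1
        log_base_path: /var/log/containers/stdouts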
Dec 09 10:31:10 compute-0 sudo[106013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncgnzhmodqqvblucoqbxczvalhfarwpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276270.639473-437-169828571923394/AnsiballZ_stat.py'
Dec 09 10:31:10 compute-0 sudo[106013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:11 compute-0 python3.9[106015]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:31:11 compute-0 sudo[106013]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:11 compute-0 sudo[106167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehhzijztrxvqspzzsikqloafcqbflmmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276271.3913717-446-64254269064478/AnsiballZ_file.py'
Dec 09 10:31:11 compute-0 sudo[106167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:11 compute-0 python3.9[106169]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:11 compute-0 sudo[106167]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:12 compute-0 sudo[106243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjuenqnnwilmwfftfxlqablcyhbcnpso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276271.3913717-446-64254269064478/AnsiballZ_stat.py'
Dec 09 10:31:12 compute-0 sudo[106243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:12 compute-0 python3.9[106245]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:31:12 compute-0 sudo[106243]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:12 compute-0 sudo[106394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpnigvidysoqygvzarhqengfydtdeusn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276272.496397-446-118819710670613/AnsiballZ_copy.py'
Dec 09 10:31:12 compute-0 sudo[106394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:13 compute-0 python3.9[106396]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276272.496397-446-118819710670613/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:13 compute-0 sudo[106394]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:13 compute-0 sudo[106470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toiwshepfrnclgrdcsnystukisybjdbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276272.496397-446-118819710670613/AnsiballZ_systemd.py'
Dec 09 10:31:13 compute-0 sudo[106470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:13 compute-0 python3.9[106472]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:31:13 compute-0 systemd[1]: Reloading.
Dec 09 10:31:13 compute-0 systemd-rc-local-generator[106498]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:31:13 compute-0 systemd-sysv-generator[106502]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:31:14 compute-0 sudo[106470]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:14 compute-0 sudo[106581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgchejvylnurttdseqcbbspfatbhqvtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276272.496397-446-118819710670613/AnsiballZ_systemd.py'
Dec 09 10:31:14 compute-0 sudo[106581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:14 compute-0 python3.9[106583]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:31:14 compute-0 systemd[1]: Reloading.
Dec 09 10:31:14 compute-0 systemd-rc-local-generator[106610]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:31:14 compute-0 systemd-sysv-generator[106616]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:31:14 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 09 10:31:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d2449839733707e5a9b3894384d1e187573cc3f5bda89bccbba26ed260b5da/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 09 10:31:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d2449839733707e5a9b3894384d1e187573cc3f5bda89bccbba26ed260b5da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 09 10:31:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.
Dec 09 10:31:15 compute-0 podman[106624]: 2025-12-09 10:31:15.059281268 +0000 UTC m=+0.130248355 container init 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: + sudo -E kolla_set_configs
Dec 09 10:31:15 compute-0 podman[106624]: 2025-12-09 10:31:15.090710243 +0000 UTC m=+0.161677280 container start 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 10:31:15 compute-0 edpm-start-podman-container[106624]: ovn_metadata_agent
Dec 09 10:31:15 compute-0 edpm-start-podman-container[106623]: Creating additional drop-in dependency for "ovn_metadata_agent" (8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403)
Dec 09 10:31:15 compute-0 podman[106646]: 2025-12-09 10:31:15.155417388 +0000 UTC m=+0.050324011 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
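
The config_data blob that podman echoes in the three container events above is a Python dict literal embedded in the log line. A minimal sketch (standard library only; the field names are read straight from the lines above, nothing else is assumed) for pulling the bind mounts and healthcheck command back out of such a line:

    import ast
    import sys

    def extract_config_data(line):
        """Return the config_data={...} dict embedded in a podman event log line.

        The dict is printed as a Python literal, so a balanced-brace scan plus
        ast.literal_eval is enough; no external parser is needed.
        """
        start = line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(line[start:i + 1])
        raise ValueError("unbalanced config_data literal")

    if __name__ == "__main__":
        # Pipe the journal export through stdin and report what each container
        # event says about the image, its volumes and its healthcheck.
        for line in sys.stdin:
            if "config_data=" in line:
                cfg = extract_config_data(line.rstrip("\n"))
                print("image:", cfg["image"])
                for vol in cfg.get("volumes", []):
                    print("  mount:", vol)
                print("healthcheck:", cfg.get("healthcheck", {}).get("test"))
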
Dec 09 10:31:15 compute-0 systemd[1]: Reloading.
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Validating config file
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Copying service configuration files
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Writing out command to execute
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: ++ cat /run_command
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: + CMD=neutron-ovn-metadata-agent
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: + ARGS=
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: + sudo kolla_copy_cacerts
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: + [[ ! -n '' ]]
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: + . kolla_extend_start
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: Running command: 'neutron-ovn-metadata-agent'
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: + umask 0022
Dec 09 10:31:15 compute-0 ovn_metadata_agent[106639]: + exec neutron-ovn-metadata-agent
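
The INFO lines above are kolla_set_configs running with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS: it loads /var/lib/kolla/config_files/config.json, copies each declared source to its destination, fixes permissions, and writes the command that the start script later reads back with `cat /run_command` and exec's. A simplified sketch of that copy phase (not the real kolla_set_configs; it assumes the usual config.json shape of a top-level "command" plus a "config_files" list of {"source", "dest", "owner", "perm"} entries, and skips globbing and ownership handling):

    import json
    import os
    import shutil

    def apply_kolla_config(config_path="/var/lib/kolla/config_files/config.json"):
        with open(config_path) as fh:
            cfg = json.load(fh)

        for entry in cfg.get("config_files", []):
            dest = entry["dest"]
            if os.path.exists(dest):
                os.remove(dest)                     # "Deleting /etc/neutron/rootwrap.conf"
            shutil.copy(entry["source"], dest)      # "Copying ... to ..."
            os.chmod(dest, int(entry.get("perm", "0600"), 8))  # "Setting permission for ..."

        # "Writing out command to execute": the start script reads this file
        # back ("++ cat /run_command") and exec's it as CMD.
        with open("/run_command", "w") as fh:
            fh.write(cfg["command"])

    if __name__ == "__main__":
        apply_kolla_config()
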
Dec 09 10:31:15 compute-0 systemd-rc-local-generator[106711]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:31:15 compute-0 systemd-sysv-generator[106714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:31:15 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 09 10:31:15 compute-0 sudo[106581]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:16 compute-0 sshd-session[98392]: Connection closed by 192.168.122.30 port 37986
Dec 09 10:31:16 compute-0 sshd-session[98389]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:31:16 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Dec 09 10:31:16 compute-0 systemd[1]: session-22.scope: Consumed 36.257s CPU time.
Dec 09 10:31:16 compute-0 systemd-logind[806]: Session 22 logged out. Waiting for processes to exit.
Dec 09 10:31:16 compute-0 systemd-logind[806]: Removed session 22.
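
The long DEBUG block that follows is oslo.config dumping every effective option at startup; each entry cites log_opt_values in oslo_config/cfg.py. A minimal, self-contained sketch of producing the same kind of dump with oslo.config (the three option names are just a sample copied from the dump below; the real agent registers its options through neutron's config modules):

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    logging.basicConfig(level=logging.DEBUG)

    # Illustrative subset; the agent registers hundreds of options
    # (agent_down_time, nova_metadata_host, metadata_workers, ...).
    OPTS = [
        cfg.IntOpt("agent_down_time", default=75),
        cfg.StrOpt("nova_metadata_host", default="127.0.0.1"),
        cfg.BoolOpt("debug", default=False),
    ]

    CONF = cfg.CONF
    CONF.register_opts(OPTS)

    if __name__ == "__main__":
        # Parse args/config files, then log every resolved value; this is the
        # call that produces the "option = value log_opt_values .../cfg.py:NNNN"
        # lines seen below.
        CONF(args=[], project="neutron")
        CONF.log_opt_values(LOG, logging.DEBUG)
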
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.911 106644 INFO neutron.common.config [-] Logging enabled!
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.945 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.945 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
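[editor's note] The non-default OVN/OVS options in the dump above (the southbound connection, its TLS material, and the local OVSDB endpoint) are loaded by oslo.config from /etc/neutron/neutron.conf and the /etc/neutron.conf.d directory named on the agent command line. As a reference only, a minimal configuration fragment that would reproduce the logged values is sketched below; the file layout and which options are explicit rather than defaults are assumptions, while the values themselves are copied from the log lines above.

    # sketch reconstructed from the logged option dump; file/section placement is assumed
    [ovn]
    ovn_sb_connection = ssl:ovsdbserver-sb.openstack.svc:6642
    ovn_sb_ca_cert = /etc/pki/tls/certs/ovndbca.crt
    ovn_sb_certificate = /etc/pki/tls/certs/ovndb.crt
    ovn_sb_private_key = /etc/pki/tls/private/ovndb.key
    ovsdb_probe_interval = 60000

    [ovs]
    ovsdb_connection = tcp:127.0.0.1:6640
    ovsdb_connection_timeout = 180

These two endpoints correspond to the connection messages that follow: the agent first connects to the local switch at tcp:127.0.0.1:6640 and then to the southbound database at ssl:ovsdbserver-sb.openstack.svc:6642.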
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.954 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.954 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.954 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.955 106644 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.955 106644 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.969 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 9ec27861-bbe8-48fb-b30f-25b967e1609e (UUID: 9ec27861-bbe8-48fb-b30f-25b967e1609e) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.996 106644 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.996 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.996 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 10:31:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.997 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.999 106644 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.005 106644 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.010 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '9ec27861-bbe8-48fb-b30f-25b967e1609e'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], external_ids={}, name=9ec27861-bbe8-48fb-b30f-25b967e1609e, nb_cfg_timestamp=1765276225763, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.011 106644 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fa01184a0d0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.013 106644 INFO oslo_service.service [-] Starting 1 workers
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.016 106644 DEBUG oslo_service.service [-] Started child 106752 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.020 106644 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmplxn4eun2/privsep.sock']
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.022 106752 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-167717'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.051 106752 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.051 106752 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.051 106752 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.055 106752 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.062 106752 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.068 106752 INFO eventlet.wsgi.server [-] (106752) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 09 10:31:17 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.752 106644 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.753 106644 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmplxn4eun2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.577 106757 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.583 106757 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.585 106757 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.586 106757 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106757
Dec 09 10:31:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.756 106757 DEBUG oslo.privsep.daemon [-] privsep: reply[9b57e096-cee7-4896-93fe-8cfb0a377c52]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:31:18 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.238 106757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:31:18 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.238 106757 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:31:18 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.238 106757 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:31:18 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.773 106757 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9c2ce1-450d-44a9-a7ba-95e64bd97ad5]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:31:18 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.776 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, column=external_ids, values=({'neutron:ovn-metadata-id': 'bdbab969-a13d-5bc4-9a5f-1e6f9a29c628'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:31:18 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.929 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:31:18 compute-0 podman[106762]: 2025-12-09 10:31:18.962386016 +0000 UTC m=+0.110209526 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.193 106644 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.193 106644 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.193 106644 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.193 106644 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.194 106644 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.194 106644 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.194 106644 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.194 106644 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.196 106644 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.196 106644 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.196 106644 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.196 106644 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.198 106644 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.198 106644 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.198 106644 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.198 106644 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.224 106644 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.224 106644 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.224 106644 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.224 106644 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:31:19 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 09 10:31:21 compute-0 sshd-session[106789]: Accepted publickey for zuul from 192.168.122.30 port 52204 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:31:21 compute-0 systemd-logind[806]: New session 23 of user zuul.
Dec 09 10:31:21 compute-0 systemd[1]: Started Session 23 of User zuul.
Dec 09 10:31:21 compute-0 sshd-session[106789]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:31:22 compute-0 python3.9[106942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:31:24 compute-0 sudo[107096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxqlymlfhqhxhxyfdgshnjvihludkrlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276283.6092696-34-228419413067649/AnsiballZ_command.py'
Dec 09 10:31:24 compute-0 sudo[107096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:24 compute-0 python3.9[107098]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:31:24 compute-0 sudo[107096]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:25 compute-0 sudo[107261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfqcrzeaupmoqnkqkkbmhvilunpbpbqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276284.895118-45-266737084319740/AnsiballZ_systemd_service.py'
Dec 09 10:31:25 compute-0 sudo[107261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:25 compute-0 python3.9[107263]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:31:25 compute-0 systemd[1]: Reloading.
Dec 09 10:31:25 compute-0 systemd-rc-local-generator[107289]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:31:25 compute-0 systemd-sysv-generator[107292]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:31:26 compute-0 sudo[107261]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:26 compute-0 python3.9[107447]: ansible-ansible.builtin.service_facts Invoked
Dec 09 10:31:27 compute-0 network[107464]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 10:31:27 compute-0 network[107465]: 'network-scripts' will be removed from distribution in near future.
Dec 09 10:31:27 compute-0 network[107466]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 10:31:29 compute-0 sshd-session[107545]: Invalid user admin from 159.223.8.217 port 42026
Dec 09 10:31:29 compute-0 sshd-session[107545]: Connection closed by invalid user admin 159.223.8.217 port 42026 [preauth]
Dec 09 10:31:30 compute-0 sudo[107727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhentkrbkfddwdfjkgytamyfuoatgtwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276290.0741205-64-251756434191368/AnsiballZ_systemd_service.py'
Dec 09 10:31:30 compute-0 sudo[107727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:30 compute-0 python3.9[107729]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:31:30 compute-0 sudo[107727]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:31 compute-0 sudo[107880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdnndnesyfjopsfzgrkxvyabdpoduzvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276290.9603643-64-156760868268246/AnsiballZ_systemd_service.py'
Dec 09 10:31:31 compute-0 sudo[107880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:31 compute-0 python3.9[107882]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:31:32 compute-0 sudo[107880]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:33 compute-0 sudo[108033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeshejuycipsarackuzmuldqijmirwdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276292.766381-64-96612299255813/AnsiballZ_systemd_service.py'
Dec 09 10:31:33 compute-0 sudo[108033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:33 compute-0 python3.9[108035]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:31:33 compute-0 sudo[108033]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:34 compute-0 sudo[108186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxjdfhbclamqpthahfmzozwstkwblcwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276293.720971-64-267010386133594/AnsiballZ_systemd_service.py'
Dec 09 10:31:34 compute-0 sudo[108186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:34 compute-0 python3.9[108188]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:31:34 compute-0 sudo[108186]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:34 compute-0 sudo[108339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbjdltbxgkyqwspubbtvrlsqgcddhsac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276294.5262868-64-139906262804200/AnsiballZ_systemd_service.py'
Dec 09 10:31:34 compute-0 sudo[108339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:35 compute-0 python3.9[108341]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:31:35 compute-0 sudo[108339]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:35 compute-0 sudo[108492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toxzwfjsklwkzvnvuynwlgyqvkoyfmeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276295.3329818-64-19854534738351/AnsiballZ_systemd_service.py'
Dec 09 10:31:35 compute-0 sudo[108492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:36 compute-0 python3.9[108494]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:31:36 compute-0 sudo[108492]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:36 compute-0 sudo[108645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igzlfqyxgvodvocfufsyxbhydfepoaws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276296.4011798-64-147465420109660/AnsiballZ_systemd_service.py'
Dec 09 10:31:36 compute-0 sudo[108645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:36 compute-0 python3.9[108647]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:31:36 compute-0 sudo[108645]: pam_unix(sudo:session): session closed for user root
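Between 10:31:30 and 10:31:36 the log shows ansible.builtin.systemd_service invoked once per legacy tripleo_nova unit with state=stopped and enabled=False. A minimal task sketch that would produce this sequence (unit names and module parameters are taken from the logged invocations; the loop form and task name are assumptions, not the actual playbook):

    - name: Stop and disable legacy tripleo_nova libvirt units (sketch)
      become: true
      ansible.builtin.systemd_service:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - tripleo_nova_libvirt.target
        - tripleo_nova_virtlogd_wrapper.service
        - tripleo_nova_virtnodedevd.service
        - tripleo_nova_virtproxyd.service
        - tripleo_nova_virtqemud.service
        - tripleo_nova_virtsecretd.service
        - tripleo_nova_virtstoraged.service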
Dec 09 10:31:38 compute-0 sudo[108798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcxaflxgppcsrcvkwcbiekbkituzjria ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276297.8942435-116-233125850631515/AnsiballZ_file.py'
Dec 09 10:31:38 compute-0 sudo[108798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:38 compute-0 python3.9[108800]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:38 compute-0 sudo[108798]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:39 compute-0 sudo[108950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdrftvwbczoazuwjxvkzjoonxnxunajv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276298.7427046-116-151786295352344/AnsiballZ_file.py'
Dec 09 10:31:39 compute-0 sudo[108950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:39 compute-0 python3.9[108952]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:39 compute-0 sudo[108950]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:39 compute-0 sudo[109102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnjynrsszbogfgcchfyswvvalkrakhju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276299.5461066-116-227834966115229/AnsiballZ_file.py'
Dec 09 10:31:39 compute-0 sudo[109102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:40 compute-0 python3.9[109104]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:40 compute-0 sudo[109102]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:40 compute-0 sudo[109254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nujjsnxilzckmdjaifandtwtzxkfeqnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276300.196376-116-207893071145506/AnsiballZ_file.py'
Dec 09 10:31:40 compute-0 sudo[109254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:40 compute-0 python3.9[109256]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:40 compute-0 sudo[109254]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:41 compute-0 sudo[109406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clcqtwgvhxyqgrauiqnesbndqfwhjewq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276300.7997668-116-121801897438750/AnsiballZ_file.py'
Dec 09 10:31:41 compute-0 sudo[109406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:41 compute-0 python3.9[109408]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:41 compute-0 sudo[109406]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:41 compute-0 sudo[109558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjktfehvbtpbxysoypczcbdtnqtsyxti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276301.4971867-116-41523298862741/AnsiballZ_file.py'
Dec 09 10:31:41 compute-0 sudo[109558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:41 compute-0 python3.9[109560]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:41 compute-0 sudo[109558]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:42 compute-0 sudo[109710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drtxwypghppqfqqbyisbaspkdswhjluc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276302.1237025-116-222762779758349/AnsiballZ_file.py'
Dec 09 10:31:42 compute-0 sudo[109710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:42 compute-0 python3.9[109712]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:42 compute-0 sudo[109710]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:43 compute-0 sudo[109862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqqgsgmhrpfnslcnmyjmepkmaxdkkjbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276302.784078-166-116767213777433/AnsiballZ_file.py'
Dec 09 10:31:43 compute-0 sudo[109862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:43 compute-0 python3.9[109864]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:43 compute-0 sudo[109862]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:43 compute-0 sudo[110014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqscteszxjjuwhnzgisyrudafvvcaueo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276303.365158-166-243902621430873/AnsiballZ_file.py'
Dec 09 10:31:43 compute-0 sudo[110014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:43 compute-0 python3.9[110016]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:43 compute-0 sudo[110014]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:44 compute-0 sudo[110166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkukgmncuxnfdodxsvrdienympwdvguc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276304.1283393-166-273072500324703/AnsiballZ_file.py'
Dec 09 10:31:44 compute-0 sudo[110166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:44 compute-0 python3.9[110168]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:44 compute-0 sudo[110166]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:45 compute-0 sudo[110318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wippgocqtnphawnjkxlifbgjrnnsbbnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276304.7574732-166-145595122119325/AnsiballZ_file.py'
Dec 09 10:31:45 compute-0 sudo[110318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:45 compute-0 python3.9[110320]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:45 compute-0 sudo[110318]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:45 compute-0 sudo[110481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqeowdgrezqdeqimhqbdeqsvxikynelx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276305.4376442-166-81601761769739/AnsiballZ_file.py'
Dec 09 10:31:45 compute-0 sudo[110481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:45 compute-0 podman[110444]: 2025-12-09 10:31:45.765030632 +0000 UTC m=+0.082174050 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 09 10:31:45 compute-0 python3.9[110487]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:45 compute-0 sudo[110481]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:46 compute-0 sudo[110640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imwzjvdcavyzbtmylermzjhnricqoqox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276306.1959445-166-68361881559020/AnsiballZ_file.py'
Dec 09 10:31:46 compute-0 sudo[110640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:46 compute-0 python3.9[110642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:46 compute-0 sudo[110640]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:47 compute-0 sudo[110792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqsgvtwnkjkiwqljtzycxdnitjrkqgif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276306.8236768-166-49482634459410/AnsiballZ_file.py'
Dec 09 10:31:47 compute-0 sudo[110792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:47 compute-0 python3.9[110794]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:31:47 compute-0 sudo[110792]: pam_unix(sudo:session): session closed for user root
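The same units' files are then removed with ansible.builtin.file state=absent, first under /usr/lib/systemd/system and then under /etc/systemd/system. A hedged sketch of an equivalent task (paths are from the log entries above; combining both directories with a product loop is an assumption):

    - name: Remove legacy tripleo_nova unit files (sketch)
      become: true
      ansible.builtin.file:
        path: "{{ item[0] }}/{{ item[1] }}"
        state: absent
      loop: "{{ ['/usr/lib/systemd/system', '/etc/systemd/system'] | product(tripleo_nova_units) | list }}"
      vars:
        tripleo_nova_units:
          - tripleo_nova_libvirt.target
          - tripleo_nova_virtlogd_wrapper.service
          - tripleo_nova_virtnodedevd.service
          - tripleo_nova_virtproxyd.service
          - tripleo_nova_virtqemud.service
          - tripleo_nova_virtsecretd.service
          - tripleo_nova_virtstoraged.service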
Dec 09 10:31:47 compute-0 sudo[110944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etfxvyslyrunqytjeasznkivzwlqgros ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276307.566933-217-70640295588305/AnsiballZ_command.py'
Dec 09 10:31:47 compute-0 sudo[110944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:48 compute-0 python3.9[110946]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:31:48 compute-0 sudo[110944]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:49 compute-0 python3.9[111098]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 10:31:49 compute-0 sudo[111268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlvvhbulwialzaufefcfuneiieacewjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276309.6309283-235-56905048920899/AnsiballZ_systemd_service.py'
Dec 09 10:31:49 compute-0 sudo[111268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:49 compute-0 podman[111215]: 2025-12-09 10:31:49.930418537 +0000 UTC m=+0.085668489 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 09 10:31:50 compute-0 python3.9[111275]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:31:50 compute-0 systemd[1]: Reloading.
Dec 09 10:31:50 compute-0 systemd-rc-local-generator[111300]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:31:50 compute-0 systemd-sysv-generator[111305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:31:50 compute-0 sudo[111268]: pam_unix(sudo:session): session closed for user root
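After the unit files are gone, a bare daemon_reload is issued; the systemd "Reloading." entry that follows is its direct effect. An equivalent task, assuming the same module seen in the log:

    - name: Reload systemd after removing the tripleo_nova unit files (sketch)
      become: true
      ansible.builtin.systemd_service:
        daemon_reload: true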
Dec 09 10:31:51 compute-0 sudo[111462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdmdbwbvnnyvgojnrvsnkckrmihbqwtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276310.8563285-243-246914878479078/AnsiballZ_command.py'
Dec 09 10:31:51 compute-0 sudo[111462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:51 compute-0 python3.9[111464]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:31:51 compute-0 sudo[111462]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:51 compute-0 sudo[111615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbymzbshatwwipmhrldpdbhqntyquthz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276311.4759662-243-42865037904819/AnsiballZ_command.py'
Dec 09 10:31:51 compute-0 sudo[111615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:51 compute-0 python3.9[111617]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:31:51 compute-0 sudo[111615]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:52 compute-0 sudo[111768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyzsboefjobauxeweqpzuwpolqybcbwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276312.492174-243-50639674639662/AnsiballZ_command.py'
Dec 09 10:31:52 compute-0 sudo[111768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:52 compute-0 python3.9[111770]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:31:52 compute-0 sudo[111768]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:53 compute-0 sudo[111921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sorhchjtvmodllmzzjyffpaumpgfgnvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276313.086435-243-94008773849/AnsiballZ_command.py'
Dec 09 10:31:53 compute-0 sudo[111921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:53 compute-0 python3.9[111923]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:31:53 compute-0 sudo[111921]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:54 compute-0 sudo[112074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcczpdiysmgditalyncloirqkzjtdvsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276313.7802675-243-153857303441005/AnsiballZ_command.py'
Dec 09 10:31:54 compute-0 sudo[112074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:54 compute-0 python3.9[112076]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:31:54 compute-0 sudo[112074]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:54 compute-0 sudo[112227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trfvkuzfnbvmlumtjlxmcrzkjecpacqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276314.3925154-243-200730546620768/AnsiballZ_command.py'
Dec 09 10:31:54 compute-0 sudo[112227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:54 compute-0 python3.9[112229]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:31:54 compute-0 sudo[112227]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:55 compute-0 sudo[112380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfljpbwtwxtclvnsjppxkqiphsiefyvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276315.065094-243-109453060432560/AnsiballZ_command.py'
Dec 09 10:31:55 compute-0 sudo[112380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:55 compute-0 python3.9[112382]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:31:55 compute-0 sudo[112380]: pam_unix(sudo:session): session closed for user root
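Each removed unit then gets a systemctl reset-failed so stale failed state does not linger in systemd. The log shows one ansible.legacy.command call per unit; a compact sketch of the same step (the loop is an assumption):

    - name: Clear failed state for the removed tripleo_nova units (sketch)
      become: true
      ansible.builtin.command: /usr/bin/systemctl reset-failed {{ item }}
      loop:
        - tripleo_nova_libvirt.target
        - tripleo_nova_virtlogd_wrapper.service
        - tripleo_nova_virtnodedevd.service
        - tripleo_nova_virtproxyd.service
        - tripleo_nova_virtqemud.service
        - tripleo_nova_virtsecretd.service
        - tripleo_nova_virtstoraged.service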
Dec 09 10:31:56 compute-0 sudo[112533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzycjgvrdkuivweqmdzwaegdsvwsfabu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276316.0587265-297-255365195192416/AnsiballZ_getent.py'
Dec 09 10:31:56 compute-0 sudo[112533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:56 compute-0 python3.9[112535]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 09 10:31:56 compute-0 sudo[112533]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:57 compute-0 sudo[112686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptoubzdgqywxlwgljhhmtohzrtcqsxmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276317.1076367-305-87944635403557/AnsiballZ_group.py'
Dec 09 10:31:57 compute-0 sudo[112686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:57 compute-0 python3.9[112688]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 10:31:57 compute-0 groupadd[112689]: group added to /etc/group: name=libvirt, GID=42473
Dec 09 10:31:57 compute-0 groupadd[112689]: group added to /etc/gshadow: name=libvirt
Dec 09 10:31:57 compute-0 groupadd[112689]: new group: name=libvirt, GID=42473
Dec 09 10:31:57 compute-0 sudo[112686]: pam_unix(sudo:session): session closed for user root
Dec 09 10:31:58 compute-0 sudo[112844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njcvqekwvaopegkvgylrysfnytxguqym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276317.985972-313-140035111940536/AnsiballZ_user.py'
Dec 09 10:31:58 compute-0 sudo[112844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:31:58 compute-0 python3.9[112846]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 10:31:58 compute-0 useradd[112848]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 09 10:31:58 compute-0 sudo[112844]: pam_unix(sudo:session): session closed for user root
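With the TripleO units gone, a libvirt group and a matching nologin account are created at UID/GID 42473 (values from the groupadd and useradd entries above). A hedged equivalent of the two logged module calls:

    - name: Ensure the libvirt group exists (sketch)
      become: true
      ansible.builtin.group:
        name: libvirt
        gid: 42473
        state: present

    - name: Ensure the libvirt service account exists (sketch)
      become: true
      ansible.builtin.user:
        name: libvirt
        uid: 42473
        group: libvirt
        comment: libvirt user
        shell: /sbin/nologin
        state: present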
Dec 09 10:31:59 compute-0 sudo[113004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiovuodhiqhfinoiuzuivaaohehlxwis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276319.2181396-324-60585360073463/AnsiballZ_setup.py'
Dec 09 10:31:59 compute-0 sudo[113004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:32:00 compute-0 python3.9[113006]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:32:00 compute-0 sudo[113004]: pam_unix(sudo:session): session closed for user root
Dec 09 10:32:00 compute-0 sudo[113090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvwugxrclmhylfmubqihaqmlacqfaohd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276319.2181396-324-60585360073463/AnsiballZ_dnf.py'
Dec 09 10:32:00 compute-0 sudo[113090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:32:01 compute-0 sshd-session[113059]: Invalid user admin from 159.223.8.217 port 32938
Dec 09 10:32:01 compute-0 sshd-session[113059]: Connection closed by invalid user admin 159.223.8.217 port 32938 [preauth]
Dec 09 10:32:01 compute-0 python3.9[113092]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
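The dnf invocation at 10:32:01 installs the libvirt/QEMU stack plus supporting packages; its sudo session does not close until 10:34:13, so the interleaved podman health checks, SELinux policy reloads and sshd probes below all happen while the transaction runs. A sketch of the same install (package list copied from the logged invocation, with the stray trailing spaces in the first four names dropped):

    - name: Install libvirt, QEMU and supporting packages (sketch)
      become: true
      ansible.builtin.dnf:
        state: present
        name:
          - libvirt
          - libvirt-admin
          - libvirt-client
          - libvirt-daemon
          - qemu-kvm
          - qemu-img
          - libguestfs
          - libseccomp
          - swtpm
          - swtpm-tools
          - edk2-ovmf
          - ceph-common
          - cyrus-sasl-scram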
Dec 09 10:32:15 compute-0 podman[113103]: 2025-12-09 10:32:15.930631118 +0000 UTC m=+0.086637618 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:32:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:32:16.957 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:32:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:32:16.958 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:32:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:32:16.959 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:32:21 compute-0 podman[113224]: 2025-12-09 10:32:21.020190243 +0000 UTC m=+0.157215255 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 10:32:31 compute-0 sshd-session[113330]: Invalid user backup from 159.223.8.217 port 42812
Dec 09 10:32:32 compute-0 sshd-session[113330]: Connection closed by invalid user backup 159.223.8.217 port 42812 [preauth]
Dec 09 10:32:41 compute-0 kernel: SELinux:  Converting 2757 SID table entries...
Dec 09 10:32:41 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 10:32:41 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 10:32:41 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 10:32:41 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 10:32:41 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 10:32:41 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 10:32:41 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 10:32:46 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 09 10:32:47 compute-0 podman[113340]: 2025-12-09 10:32:47.379790329 +0000 UTC m=+0.503219467 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:32:52 compute-0 kernel: SELinux:  Converting 2757 SID table entries...
Dec 09 10:32:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 10:32:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 10:32:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 10:32:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 10:32:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 10:32:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 10:32:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 10:32:52 compute-0 podman[113364]: 2025-12-09 10:32:52.291685164 +0000 UTC m=+0.084647858 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 09 10:33:01 compute-0 sshd-session[113395]: Invalid user backup from 159.223.8.217 port 50980
Dec 09 10:33:01 compute-0 sshd-session[113395]: Connection closed by invalid user backup 159.223.8.217 port 50980 [preauth]
Dec 09 10:33:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:33:16.958 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:33:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:33:16.960 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:33:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:33:16.960 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:33:17 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 09 10:33:17 compute-0 podman[121221]: 2025-12-09 10:33:17.921259224 +0000 UTC m=+0.062432152 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, tcib_managed=true)
Dec 09 10:33:22 compute-0 podman[124403]: 2025-12-09 10:33:22.959636303 +0000 UTC m=+0.122358261 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 10:33:31 compute-0 sshd-session[129832]: Invalid user backup from 159.223.8.217 port 60728
Dec 09 10:33:31 compute-0 sshd-session[129832]: Connection closed by invalid user backup 159.223.8.217 port 60728 [preauth]
Dec 09 10:33:49 compute-0 podman[130249]: 2025-12-09 10:33:49.01845792 +0000 UTC m=+0.155113640 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 10:33:54 compute-0 podman[130272]: 2025-12-09 10:33:54.05081029 +0000 UTC m=+0.180984235 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 10:33:55 compute-0 kernel: SELinux:  Converting 2758 SID table entries...
Dec 09 10:33:55 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 09 10:33:55 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 09 10:33:55 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 09 10:33:55 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 09 10:33:55 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 09 10:33:55 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 09 10:33:55 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 09 10:33:57 compute-0 groupadd[130309]: group added to /etc/group: name=dnsmasq, GID=992
Dec 09 10:33:57 compute-0 groupadd[130309]: group added to /etc/gshadow: name=dnsmasq
Dec 09 10:33:57 compute-0 groupadd[130309]: new group: name=dnsmasq, GID=992
Dec 09 10:33:57 compute-0 useradd[130316]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 09 10:33:57 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec 09 10:33:57 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 09 10:33:57 compute-0 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec 09 10:33:58 compute-0 groupadd[130329]: group added to /etc/group: name=clevis, GID=991
Dec 09 10:33:58 compute-0 groupadd[130329]: group added to /etc/gshadow: name=clevis
Dec 09 10:33:58 compute-0 groupadd[130329]: new group: name=clevis, GID=991
Dec 09 10:33:58 compute-0 useradd[130336]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 09 10:33:58 compute-0 usermod[130346]: add 'clevis' to group 'tss'
Dec 09 10:33:58 compute-0 usermod[130346]: add 'clevis' to shadow group 'tss'
Dec 09 10:34:00 compute-0 sshd-session[130367]: Invalid user backup from 159.223.8.217 port 44476
Dec 09 10:34:00 compute-0 sshd-session[130367]: Connection closed by invalid user backup 159.223.8.217 port 44476 [preauth]
Dec 09 10:34:01 compute-0 polkitd[43647]: Reloading rules
Dec 09 10:34:01 compute-0 polkitd[43647]: Collecting garbage unconditionally...
Dec 09 10:34:01 compute-0 polkitd[43647]: Loading rules from directory /etc/polkit-1/rules.d
Dec 09 10:34:01 compute-0 polkitd[43647]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 09 10:34:01 compute-0 polkitd[43647]: Finished loading, compiling and executing 3 rules
Dec 09 10:34:01 compute-0 polkitd[43647]: Reloading rules
Dec 09 10:34:01 compute-0 polkitd[43647]: Collecting garbage unconditionally...
Dec 09 10:34:01 compute-0 polkitd[43647]: Loading rules from directory /etc/polkit-1/rules.d
Dec 09 10:34:01 compute-0 polkitd[43647]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 09 10:34:01 compute-0 polkitd[43647]: Finished loading, compiling and executing 3 rules
Dec 09 10:34:03 compute-0 groupadd[130535]: group added to /etc/group: name=ceph, GID=167
Dec 09 10:34:03 compute-0 groupadd[130535]: group added to /etc/gshadow: name=ceph
Dec 09 10:34:03 compute-0 groupadd[130535]: new group: name=ceph, GID=167
Dec 09 10:34:03 compute-0 useradd[130541]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 09 10:34:06 compute-0 sshd[1007]: Received signal 15; terminating.
Dec 09 10:34:06 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 09 10:34:06 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 09 10:34:06 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 09 10:34:06 compute-0 systemd[1]: sshd.service: Consumed 2.403s CPU time, read 32.0K from disk, written 16.0K to disk.
Dec 09 10:34:06 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 09 10:34:06 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 09 10:34:06 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 10:34:06 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 10:34:06 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 09 10:34:06 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 09 10:34:06 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 09 10:34:06 compute-0 sshd[131060]: Server listening on 0.0.0.0 port 22.
Dec 09 10:34:06 compute-0 sshd[131060]: Server listening on :: port 22.
Dec 09 10:34:06 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 09 10:34:09 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 10:34:09 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 10:34:09 compute-0 systemd[1]: Reloading.
Dec 09 10:34:09 compute-0 systemd-rc-local-generator[131318]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:09 compute-0 systemd-sysv-generator[131321]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:09 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 10:34:13 compute-0 sudo[113090]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:14 compute-0 sudo[136688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtgzpphnlsfcqnajqfyrvsvlqjbsjbld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276454.131916-336-14176211125401/AnsiballZ_systemd.py'
Dec 09 10:34:14 compute-0 sudo[136688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:15 compute-0 python3.9[136710]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 10:34:15 compute-0 systemd[1]: Reloading.
Dec 09 10:34:15 compute-0 systemd-rc-local-generator[137200]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:15 compute-0 systemd-sysv-generator[137203]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:15 compute-0 sudo[136688]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:15 compute-0 sudo[137982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxvsmertjzpokvllwwstwhplosotqytr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276455.5589776-336-94486895014269/AnsiballZ_systemd.py'
Dec 09 10:34:15 compute-0 sudo[137982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:16 compute-0 python3.9[138006]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 10:34:16 compute-0 systemd[1]: Reloading.
Dec 09 10:34:16 compute-0 systemd-rc-local-generator[138556]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:16 compute-0 systemd-sysv-generator[138559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:16 compute-0 sudo[137982]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:16 compute-0 sudo[139346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlysywtegkvhpxfjwmrkwxftcqhbfudp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276456.5584502-336-178314143840908/AnsiballZ_systemd.py'
Dec 09 10:34:16 compute-0 sudo[139346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:34:16.960 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:34:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:34:16.961 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:34:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:34:16.962 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:34:17 compute-0 python3.9[139370]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 10:34:17 compute-0 systemd[1]: Reloading.
Dec 09 10:34:17 compute-0 systemd-sysv-generator[139697]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:17 compute-0 systemd-rc-local-generator[139694]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:17 compute-0 sudo[139346]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:17 compute-0 sudo[140420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkvhhifpztcinmgvhppbkxwjvlcphepw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276457.5728462-336-141465456746534/AnsiballZ_systemd.py'
Dec 09 10:34:17 compute-0 sudo[140420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:17 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 10:34:17 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 10:34:17 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.999s CPU time.
Dec 09 10:34:17 compute-0 systemd[1]: run-r0aaadbf831054fe4b97610d26b19ad4b.service: Deactivated successfully.
Dec 09 10:34:18 compute-0 python3.9[140422]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 10:34:18 compute-0 systemd[1]: Reloading.
Dec 09 10:34:18 compute-0 systemd-sysv-generator[140456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:18 compute-0 systemd-rc-local-generator[140452]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:18 compute-0 sudo[140420]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:19 compute-0 sudo[140617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlkrctwbbdkyizplobwmpwjdyfnkqkmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276458.762149-365-157051776421307/AnsiballZ_systemd.py'
Dec 09 10:34:19 compute-0 sudo[140617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:19 compute-0 podman[140586]: 2025-12-09 10:34:19.153876554 +0000 UTC m=+0.086469171 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 09 10:34:19 compute-0 python3.9[140626]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:19 compute-0 systemd[1]: Reloading.
Dec 09 10:34:19 compute-0 systemd-rc-local-generator[140654]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:19 compute-0 systemd-sysv-generator[140657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:19 compute-0 sudo[140617]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:20 compute-0 sudo[140820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofkhgngulrxdotqhtjfkxnmvsmlvpfze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276459.9690518-365-3367187697952/AnsiballZ_systemd.py'
Dec 09 10:34:20 compute-0 sudo[140820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:20 compute-0 python3.9[140822]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:20 compute-0 systemd[1]: Reloading.
Dec 09 10:34:20 compute-0 systemd-rc-local-generator[140848]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:20 compute-0 systemd-sysv-generator[140851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:20 compute-0 sudo[140820]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:21 compute-0 sudo[141010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziyadvnfvbsvgsecevwswnflmmtrtlpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276461.405068-365-96285347148036/AnsiballZ_systemd.py'
Dec 09 10:34:21 compute-0 sudo[141010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:22 compute-0 python3.9[141012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:22 compute-0 systemd[1]: Reloading.
Dec 09 10:34:22 compute-0 systemd-rc-local-generator[141042]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:22 compute-0 systemd-sysv-generator[141046]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:22 compute-0 sudo[141010]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:22 compute-0 sudo[141200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgviwpwupkwyacijbjpeaebiesxwjjef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276462.5270438-365-198833702631674/AnsiballZ_systemd.py'
Dec 09 10:34:22 compute-0 sudo[141200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:23 compute-0 python3.9[141202]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:23 compute-0 sudo[141200]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:24 compute-0 sudo[141355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kisffteinlfolfvrpxptaxyyjivhwtmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276463.6623268-365-169339370750765/AnsiballZ_systemd.py'
Dec 09 10:34:24 compute-0 sudo[141355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:25 compute-0 podman[141358]: 2025-12-09 10:34:25.003574795 +0000 UTC m=+0.152777725 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 10:34:25 compute-0 python3.9[141357]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:26 compute-0 systemd[1]: Reloading.
Dec 09 10:34:26 compute-0 systemd-rc-local-generator[141414]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:26 compute-0 systemd-sysv-generator[141418]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:26 compute-0 sudo[141355]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:27 compute-0 sudo[141572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idnulezzucoxgusrshffkwfwqxwwmxjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276467.088991-401-181508405392561/AnsiballZ_systemd.py'
Dec 09 10:34:27 compute-0 sudo[141572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:27 compute-0 python3.9[141574]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 09 10:34:27 compute-0 systemd[1]: Reloading.
Dec 09 10:34:28 compute-0 systemd-rc-local-generator[141603]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:34:28 compute-0 systemd-sysv-generator[141608]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:34:28 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 09 10:34:28 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 09 10:34:28 compute-0 sudo[141572]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:29 compute-0 sudo[141765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niqfjyzzbdnyazlfqymboqixseaykwqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276468.5093627-409-183619327042332/AnsiballZ_systemd.py'
Dec 09 10:34:29 compute-0 sudo[141765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:29 compute-0 python3.9[141767]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:29 compute-0 sudo[141765]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:29 compute-0 sudo[141922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvtjlqwyyfpwekcyopdmrjzwjmdrfee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276469.6934624-409-205156218533202/AnsiballZ_systemd.py'
Dec 09 10:34:29 compute-0 sudo[141922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:30 compute-0 python3.9[141924]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:30 compute-0 sudo[141922]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:30 compute-0 sshd-session[141870]: Invalid user backup from 159.223.8.217 port 46802
Dec 09 10:34:30 compute-0 sshd-session[141870]: Connection closed by invalid user backup 159.223.8.217 port 46802 [preauth]
Dec 09 10:34:30 compute-0 sudo[142079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwrevncbhfeibnfbbvpiicrltoiopyko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276470.6325762-409-125180031759170/AnsiballZ_systemd.py'
Dec 09 10:34:30 compute-0 sudo[142079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:31 compute-0 python3.9[142081]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:31 compute-0 sshd-session[142034]: Connection closed by authenticating user root 45.148.10.121 port 46010 [preauth]
Dec 09 10:34:31 compute-0 sudo[142079]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:31 compute-0 sudo[142234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofeihkjodmjchghmjniyrkqaxxibdgzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276471.4447658-409-260558482209311/AnsiballZ_systemd.py'
Dec 09 10:34:31 compute-0 sudo[142234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:32 compute-0 python3.9[142236]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:32 compute-0 sudo[142234]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:32 compute-0 sudo[142389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojvgvswkcauvnktvkfrysrvoeiewxhfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276472.2741396-409-215584638639096/AnsiballZ_systemd.py'
Dec 09 10:34:32 compute-0 sudo[142389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:32 compute-0 python3.9[142391]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:33 compute-0 sudo[142389]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:33 compute-0 sudo[142544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhleewywuwjwcvhzabvvhbvhivppvoth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276473.185853-409-151801963846954/AnsiballZ_systemd.py'
Dec 09 10:34:33 compute-0 sudo[142544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:34 compute-0 python3.9[142546]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:34 compute-0 sudo[142544]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:34 compute-0 sudo[142699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isrjpzruksykwmvppaiaayafkgiydegg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276474.268065-409-97628468814298/AnsiballZ_systemd.py'
Dec 09 10:34:34 compute-0 sudo[142699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:34 compute-0 python3.9[142701]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:34 compute-0 sudo[142699]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:35 compute-0 sudo[142854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ricbhibqfchcueahmjigvubprtqgzqqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276475.0715342-409-233098068843790/AnsiballZ_systemd.py'
Dec 09 10:34:35 compute-0 sudo[142854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:35 compute-0 python3.9[142856]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:35 compute-0 sudo[142854]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:36 compute-0 sudo[143009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwzkbkuxfwbbnbxpbznjawuaduwiomvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276476.0995877-409-107154233293820/AnsiballZ_systemd.py'
Dec 09 10:34:36 compute-0 sudo[143009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:38 compute-0 python3.9[143011]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:39 compute-0 sudo[143009]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:39 compute-0 sudo[143164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cestbpjixpmjeaxpzrythkjqhohixjcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276479.2054567-409-267876810210989/AnsiballZ_systemd.py'
Dec 09 10:34:39 compute-0 sudo[143164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:40 compute-0 python3.9[143166]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:40 compute-0 sudo[143164]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:41 compute-0 sudo[143319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqzsijswtrsvotauuhntoregtaupxzdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276480.4140158-409-231873441609673/AnsiballZ_systemd.py'
Dec 09 10:34:41 compute-0 sudo[143319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:41 compute-0 python3.9[143321]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:41 compute-0 sudo[143319]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:42 compute-0 sudo[143474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwyceufbxqhinsssyhntpzzcqwjsouqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276481.6798124-409-64821331621802/AnsiballZ_systemd.py'
Dec 09 10:34:42 compute-0 sudo[143474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:42 compute-0 python3.9[143476]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:42 compute-0 sudo[143474]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:42 compute-0 sudo[143629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwodjqbltasqounhyfrkhoiorlkcdpyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276482.6789296-409-216542047354156/AnsiballZ_systemd.py'
Dec 09 10:34:42 compute-0 sudo[143629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:43 compute-0 python3.9[143631]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:43 compute-0 sudo[143629]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:43 compute-0 sudo[143784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkhnpjwenitjkytdfbxzmivlnxxgnaim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276483.493632-409-76988248448227/AnsiballZ_systemd.py'
Dec 09 10:34:43 compute-0 sudo[143784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:44 compute-0 python3.9[143786]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 09 10:34:44 compute-0 sudo[143784]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:45 compute-0 sudo[143939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvocnblrzpcygbjzcmtfhdketdzjwakb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276484.8578527-511-102290769724851/AnsiballZ_file.py'
Dec 09 10:34:45 compute-0 sudo[143939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:45 compute-0 python3.9[143941]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:34:45 compute-0 sudo[143939]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:45 compute-0 sudo[144091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwvpsvmeekydvfnpohaeclpxkhhiekrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276485.7135682-511-93582195199868/AnsiballZ_file.py'
Dec 09 10:34:45 compute-0 sudo[144091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:46 compute-0 python3.9[144093]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:34:46 compute-0 sudo[144091]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:46 compute-0 sudo[144243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svyxughfeeunhaygovkxnchbhxoxqzle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276486.4665167-511-229019970230658/AnsiballZ_file.py'
Dec 09 10:34:46 compute-0 sudo[144243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:46 compute-0 python3.9[144245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:34:46 compute-0 sudo[144243]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:47 compute-0 sudo[144395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iimvuddkggithtoxbnrmaroywhopxnla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276487.103575-511-193345945367510/AnsiballZ_file.py'
Dec 09 10:34:47 compute-0 sudo[144395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:47 compute-0 python3.9[144397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:34:47 compute-0 sudo[144395]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:48 compute-0 sudo[144547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rofykccrkkppdbgwhouwsxvcoktvjmea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276488.1229532-511-197537814424427/AnsiballZ_file.py'
Dec 09 10:34:48 compute-0 sudo[144547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:48 compute-0 python3.9[144549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:34:48 compute-0 sudo[144547]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:49 compute-0 sudo[144699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phjpdggkaryudrntmysbxpojmnfvnvgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276488.8294318-511-10973163147664/AnsiballZ_file.py'
Dec 09 10:34:49 compute-0 sudo[144699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:49 compute-0 podman[144701]: 2025-12-09 10:34:49.301735399 +0000 UTC m=+0.092593806 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:34:49 compute-0 python3.9[144702]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:34:49 compute-0 sudo[144699]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:50 compute-0 sudo[144870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snestteejqwkuthfviqknifmfpbubabn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276489.5686135-554-28059122970467/AnsiballZ_stat.py'
Dec 09 10:34:50 compute-0 sudo[144870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:50 compute-0 python3.9[144872]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:34:50 compute-0 sudo[144870]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:51 compute-0 sudo[144995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwrzerowrnrqzdgafmjklpdlbjtopgac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276489.5686135-554-28059122970467/AnsiballZ_copy.py'
Dec 09 10:34:51 compute-0 sudo[144995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:51 compute-0 python3.9[144997]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276489.5686135-554-28059122970467/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:34:51 compute-0 sudo[144995]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:51 compute-0 sudo[145147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwdcnteefmfvwijharabvvsfzeuajvik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276491.6373978-554-11644743782398/AnsiballZ_stat.py'
Dec 09 10:34:51 compute-0 sudo[145147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:52 compute-0 python3.9[145149]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:34:52 compute-0 sudo[145147]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:52 compute-0 sudo[145272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zssbhlmvrmeipbpmvgtmzsolrdaocmni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276491.6373978-554-11644743782398/AnsiballZ_copy.py'
Dec 09 10:34:52 compute-0 sudo[145272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:52 compute-0 python3.9[145274]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276491.6373978-554-11644743782398/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:34:52 compute-0 sudo[145272]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:53 compute-0 sudo[145424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhcooebnjymkmjfmwikdjkiblygtqfch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276492.919412-554-275258562983214/AnsiballZ_stat.py'
Dec 09 10:34:53 compute-0 sudo[145424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:53 compute-0 python3.9[145426]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:34:53 compute-0 sudo[145424]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:53 compute-0 sudo[145549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjsbegtmzmlugzktspuscxytjkkafvct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276492.919412-554-275258562983214/AnsiballZ_copy.py'
Dec 09 10:34:53 compute-0 sudo[145549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:54 compute-0 python3.9[145551]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276492.919412-554-275258562983214/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:34:54 compute-0 sudo[145549]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:54 compute-0 sudo[145701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgmyzlarxbmbfcbbpdelphthzlgsvjyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276494.1842458-554-74339909102518/AnsiballZ_stat.py'
Dec 09 10:34:54 compute-0 sudo[145701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:54 compute-0 python3.9[145703]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:34:54 compute-0 sudo[145701]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:55 compute-0 sudo[145836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whliloyrwzxdqfksijslycqrvwumazhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276494.1842458-554-74339909102518/AnsiballZ_copy.py'
Dec 09 10:34:55 compute-0 sudo[145836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:55 compute-0 podman[145800]: 2025-12-09 10:34:55.135189939 +0000 UTC m=+0.079568334 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 09 10:34:55 compute-0 python3.9[145842]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276494.1842458-554-74339909102518/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:34:55 compute-0 sudo[145836]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:55 compute-0 sudo[146002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntkbteizxtkjnxpevdgfcbhhsijvqbkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276495.45127-554-72442478740385/AnsiballZ_stat.py'
Dec 09 10:34:55 compute-0 sudo[146002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:55 compute-0 python3.9[146004]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:34:56 compute-0 sudo[146002]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:56 compute-0 sudo[146127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihriufljlwulpzlgmvnhpkxakqoxoofo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276495.45127-554-72442478740385/AnsiballZ_copy.py'
Dec 09 10:34:56 compute-0 sudo[146127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:56 compute-0 python3.9[146129]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276495.45127-554-72442478740385/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:34:56 compute-0 sudo[146127]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:57 compute-0 sudo[146279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzufwwkfrzvsckwqfepdhcwmhucdsmyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276496.7240787-554-190457117694357/AnsiballZ_stat.py'
Dec 09 10:34:57 compute-0 sudo[146279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:57 compute-0 python3.9[146281]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:34:57 compute-0 sudo[146279]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:57 compute-0 sudo[146404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wodhwtcmlrkxrnvuelqkdijfvnlmnmfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276496.7240787-554-190457117694357/AnsiballZ_copy.py'
Dec 09 10:34:57 compute-0 sudo[146404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:57 compute-0 python3.9[146406]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276496.7240787-554-190457117694357/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:34:57 compute-0 sudo[146404]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:58 compute-0 sudo[146556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqbbnkwygdvsccsussismygjwavuecfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276498.1114683-554-115404189868431/AnsiballZ_stat.py'
Dec 09 10:34:58 compute-0 sudo[146556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:58 compute-0 python3.9[146558]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:34:58 compute-0 sudo[146556]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:58 compute-0 sudo[146681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgmimvkpcuqwikvluarwcepjnhjpdfuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276498.1114683-554-115404189868431/AnsiballZ_copy.py'
Dec 09 10:34:58 compute-0 sudo[146681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:59 compute-0 sshd-session[146629]: Invalid user backup from 159.223.8.217 port 37828
Dec 09 10:34:59 compute-0 python3.9[146683]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276498.1114683-554-115404189868431/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:34:59 compute-0 sudo[146681]: pam_unix(sudo:session): session closed for user root
Dec 09 10:34:59 compute-0 sshd-session[146629]: Connection closed by invalid user backup 159.223.8.217 port 37828 [preauth]
Dec 09 10:34:59 compute-0 sudo[146833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgroahegnjjwteraixlfzsuxawhvroyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276499.3246698-554-170719578275275/AnsiballZ_stat.py'
Dec 09 10:34:59 compute-0 sudo[146833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:34:59 compute-0 python3.9[146835]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:34:59 compute-0 sudo[146833]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:00 compute-0 sudo[146958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tilgzesklefpkhplwwendwmjamhfuchh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276499.3246698-554-170719578275275/AnsiballZ_copy.py'
Dec 09 10:35:00 compute-0 sudo[146958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:00 compute-0 python3.9[146960]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276499.3246698-554-170719578275275/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:00 compute-0 sudo[146958]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:00 compute-0 sudo[147110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywfszpjcoesrmwcbafuokfhekupjwshs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276500.5844736-667-227431908447187/AnsiballZ_command.py'
Dec 09 10:35:00 compute-0 sudo[147110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:01 compute-0 python3.9[147112]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 09 10:35:01 compute-0 sudo[147110]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:01 compute-0 sudo[147263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnfodwzossxbbcmyzlzkexexupecbwyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276501.2710078-676-200113840556098/AnsiballZ_file.py'
Dec 09 10:35:01 compute-0 sudo[147263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:01 compute-0 python3.9[147265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:01 compute-0 sudo[147263]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:02 compute-0 sudo[147415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvdvseepltbfeuctralnfbscoksxcmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276501.9883626-676-42193847533148/AnsiballZ_file.py'
Dec 09 10:35:02 compute-0 sudo[147415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:02 compute-0 python3.9[147417]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:02 compute-0 sudo[147415]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:03 compute-0 sudo[147567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysxpkjwisjnefqwhucmsolgluomhxcsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276502.775606-676-189163015677992/AnsiballZ_file.py'
Dec 09 10:35:03 compute-0 sudo[147567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:03 compute-0 python3.9[147569]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:03 compute-0 sudo[147567]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:03 compute-0 sudo[147719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqsqhhickpakyajuntgybfelhkmmrzwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276503.557169-676-91147065965348/AnsiballZ_file.py'
Dec 09 10:35:03 compute-0 sudo[147719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:04 compute-0 python3.9[147721]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:04 compute-0 sudo[147719]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:04 compute-0 sudo[147871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipbqhoqbskmmhmsvuuzlmijayzxretux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276504.3008294-676-199092930448611/AnsiballZ_file.py'
Dec 09 10:35:04 compute-0 sudo[147871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:04 compute-0 python3.9[147873]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:04 compute-0 sudo[147871]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:05 compute-0 sudo[148023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfzgzzgngxupjflgdguconjmigccullg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276504.925004-676-236373038371588/AnsiballZ_file.py'
Dec 09 10:35:05 compute-0 sudo[148023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:05 compute-0 python3.9[148025]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:05 compute-0 sudo[148023]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:05 compute-0 sudo[148175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvvyycxikjhzctjiarnutvgyjcswctgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276505.6081362-676-232461639438678/AnsiballZ_file.py'
Dec 09 10:35:05 compute-0 sudo[148175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:06 compute-0 python3.9[148177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:06 compute-0 sudo[148175]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:06 compute-0 sudo[148327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygjvtjpnpzkfrtmxjgwpemzykgajeysn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276506.2883453-676-267585204452438/AnsiballZ_file.py'
Dec 09 10:35:06 compute-0 sudo[148327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:06 compute-0 python3.9[148329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:07 compute-0 sudo[148327]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:07 compute-0 sudo[148479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqwgqfrzrilexulxrrmpfpvnoijoefte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276507.1515293-676-77504653836937/AnsiballZ_file.py'
Dec 09 10:35:07 compute-0 sudo[148479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:07 compute-0 python3.9[148481]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:07 compute-0 sudo[148479]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:08 compute-0 sudo[148631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axmhvdcnurjeskqmorxomffjlhxrryjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276507.7673392-676-261381918799862/AnsiballZ_file.py'
Dec 09 10:35:08 compute-0 sudo[148631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:08 compute-0 python3.9[148633]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:08 compute-0 sudo[148631]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:08 compute-0 sudo[148783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrvcsizihdvvltuqkqhnxvhbunqubyzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276508.4076426-676-246820247881566/AnsiballZ_file.py'
Dec 09 10:35:08 compute-0 sudo[148783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:08 compute-0 python3.9[148785]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:08 compute-0 sudo[148783]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:09 compute-0 sudo[148935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggvwwmvtfulbkweyekufijzebskueonp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276509.098317-676-244785368723653/AnsiballZ_file.py'
Dec 09 10:35:09 compute-0 sudo[148935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:09 compute-0 python3.9[148937]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:09 compute-0 sudo[148935]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:10 compute-0 sudo[149087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttonhegwtkkwgwysnosicgwhwjowhpvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276509.7323382-676-43316259752812/AnsiballZ_file.py'
Dec 09 10:35:10 compute-0 sudo[149087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:10 compute-0 python3.9[149089]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:10 compute-0 sudo[149087]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:10 compute-0 sudo[149239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nutflooyyaetpwzappqhcwkixjoftsoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276510.3638546-676-98325681116072/AnsiballZ_file.py'
Dec 09 10:35:10 compute-0 sudo[149239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:10 compute-0 python3.9[149241]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:10 compute-0 sudo[149239]: pam_unix(sudo:session): session closed for user root
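The ansible.builtin.file tasks up to this point have created systemd drop-in directories for every modular libvirt socket unit on this node: virtlogd and virtlogd-admin, plus the main, -ro and -admin sockets of virtnodedevd, virtproxyd, virtqemud and virtsecretd. A quick way to list them, assuming the default /etc/systemd/system layout used above:

    ls -d /etc/systemd/system/virt*.socket.d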
Dec 09 10:35:11 compute-0 sudo[149391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zabkxpweuvadvduvbfogzltmadxsqfrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276511.0403106-775-51596286416993/AnsiballZ_stat.py'
Dec 09 10:35:11 compute-0 sudo[149391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:11 compute-0 python3.9[149393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:11 compute-0 sudo[149391]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:11 compute-0 sudo[149514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgcpokamxppnpersugtguyydfwkvcgsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276511.0403106-775-51596286416993/AnsiballZ_copy.py'
Dec 09 10:35:11 compute-0 sudo[149514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:12 compute-0 python3.9[149516]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276511.0403106-775-51596286416993/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:12 compute-0 sudo[149514]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:12 compute-0 sudo[149666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvvmilzthzreucwjzjsfubcmzzrcgibg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276512.3722663-775-166255459610076/AnsiballZ_stat.py'
Dec 09 10:35:12 compute-0 sudo[149666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:12 compute-0 python3.9[149668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:12 compute-0 sudo[149666]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:13 compute-0 sudo[149789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyxysftpcszqfcsridocbecifkampjlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276512.3722663-775-166255459610076/AnsiballZ_copy.py'
Dec 09 10:35:13 compute-0 sudo[149789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:13 compute-0 python3.9[149791]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276512.3722663-775-166255459610076/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:13 compute-0 sudo[149789]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:13 compute-0 sudo[149941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gihjasrsfashzollqysktjzcghdrmraj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276513.6233308-775-33387460043987/AnsiballZ_stat.py'
Dec 09 10:35:13 compute-0 sudo[149941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:14 compute-0 python3.9[149943]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:14 compute-0 sudo[149941]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:14 compute-0 sudo[150064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etmbgrmphxharxbgwidrcufzjfspysbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276513.6233308-775-33387460043987/AnsiballZ_copy.py'
Dec 09 10:35:14 compute-0 sudo[150064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:14 compute-0 python3.9[150066]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276513.6233308-775-33387460043987/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:14 compute-0 sudo[150064]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:15 compute-0 sudo[150216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrbhulmfcnpamqjkrpbjchbabrbgfuzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276514.9064238-775-206647001950809/AnsiballZ_stat.py'
Dec 09 10:35:15 compute-0 sudo[150216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:15 compute-0 python3.9[150218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:15 compute-0 sudo[150216]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:15 compute-0 sudo[150339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kujqwsbkqxvwmjiryyschaiywezfusib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276514.9064238-775-206647001950809/AnsiballZ_copy.py'
Dec 09 10:35:15 compute-0 sudo[150339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:15 compute-0 python3.9[150341]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276514.9064238-775-206647001950809/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:16 compute-0 sudo[150339]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:16 compute-0 sudo[150491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybzmbuxbgblbmxoplgidebjjpbysaich ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276516.1608899-775-36435593586137/AnsiballZ_stat.py'
Dec 09 10:35:16 compute-0 sudo[150491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:16 compute-0 python3.9[150493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:16 compute-0 sudo[150491]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:35:16.961 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:35:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:35:16.963 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:35:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:35:16.963 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:35:17 compute-0 sudo[150614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rklzpfzlbffygoepxelamsxmvblrbcvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276516.1608899-775-36435593586137/AnsiballZ_copy.py'
Dec 09 10:35:17 compute-0 sudo[150614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:17 compute-0 python3.9[150616]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276516.1608899-775-36435593586137/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:17 compute-0 sudo[150614]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:17 compute-0 sudo[150766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogugjzyvfjesenwazezkspfwzptwhvvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276517.478315-775-78102357931910/AnsiballZ_stat.py'
Dec 09 10:35:17 compute-0 sudo[150766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:18 compute-0 python3.9[150768]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:18 compute-0 sudo[150766]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:18 compute-0 sudo[150889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxxcnvkvlahjqhookelagukkakpxbnty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276517.478315-775-78102357931910/AnsiballZ_copy.py'
Dec 09 10:35:18 compute-0 sudo[150889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:18 compute-0 python3.9[150891]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276517.478315-775-78102357931910/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:18 compute-0 sudo[150889]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:19 compute-0 sudo[151041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwfenxqjcjbqfehlfnxztjrsqrqeqgsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276518.7774246-775-59977128409655/AnsiballZ_stat.py'
Dec 09 10:35:19 compute-0 sudo[151041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:19 compute-0 python3.9[151043]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:19 compute-0 sudo[151041]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:19 compute-0 sudo[151174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jahiwjpzsammonmoatwdyhmwuevxcade ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276518.7774246-775-59977128409655/AnsiballZ_copy.py'
Dec 09 10:35:19 compute-0 sudo[151174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:19 compute-0 podman[151138]: 2025-12-09 10:35:19.872059892 +0000 UTC m=+0.099713188 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 10:35:20 compute-0 python3.9[151177]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276518.7774246-775-59977128409655/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:20 compute-0 sudo[151174]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:20 compute-0 sudo[151336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gywdgsbdwzelifnajnvwttejiopsuuiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276520.2118473-775-243014481057201/AnsiballZ_stat.py'
Dec 09 10:35:20 compute-0 sudo[151336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:20 compute-0 python3.9[151338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:20 compute-0 sudo[151336]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:21 compute-0 sudo[151459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjvquwbqgdvdrfrcsjwicsfzkainwyqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276520.2118473-775-243014481057201/AnsiballZ_copy.py'
Dec 09 10:35:21 compute-0 sudo[151459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:21 compute-0 python3.9[151461]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276520.2118473-775-243014481057201/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:21 compute-0 sudo[151459]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:21 compute-0 sudo[151611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xggvbfzdqmbzlcjddtahphaihqmzbrfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276521.4038959-775-194393450327516/AnsiballZ_stat.py'
Dec 09 10:35:21 compute-0 sudo[151611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:21 compute-0 python3.9[151613]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:21 compute-0 sudo[151611]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:22 compute-0 sudo[151734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooslevxwjowxwykofmlzhfsyvdusbtdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276521.4038959-775-194393450327516/AnsiballZ_copy.py'
Dec 09 10:35:22 compute-0 sudo[151734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:22 compute-0 python3.9[151736]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276521.4038959-775-194393450327516/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:22 compute-0 sudo[151734]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:22 compute-0 sudo[151886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqnboibaxqtthzqyclylvuustzowgwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276522.6019351-775-16254896172554/AnsiballZ_stat.py'
Dec 09 10:35:22 compute-0 sudo[151886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:23 compute-0 python3.9[151888]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:23 compute-0 sudo[151886]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:23 compute-0 sudo[152009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzfjtbltuoebotgevbsuleajkogvnenv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276522.6019351-775-16254896172554/AnsiballZ_copy.py'
Dec 09 10:35:23 compute-0 sudo[152009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:23 compute-0 python3.9[152011]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276522.6019351-775-16254896172554/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:23 compute-0 sudo[152009]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:24 compute-0 sudo[152161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uonilnywcfoaahbcfpuxivbmolkdjuxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276523.9281914-775-234042580023202/AnsiballZ_stat.py'
Dec 09 10:35:24 compute-0 sudo[152161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:24 compute-0 python3.9[152163]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:24 compute-0 sudo[152161]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:24 compute-0 sudo[152284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltnjtrrwdinfavwfpgnlaeegspbhnell ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276523.9281914-775-234042580023202/AnsiballZ_copy.py'
Dec 09 10:35:24 compute-0 sudo[152284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:25 compute-0 python3.9[152286]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276523.9281914-775-234042580023202/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:25 compute-0 sudo[152284]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:25 compute-0 sudo[152448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krdqttxrrwvwideejoqhjbfzvyxjwhga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276525.2521908-775-57288425042167/AnsiballZ_stat.py'
Dec 09 10:35:25 compute-0 sudo[152448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:25 compute-0 podman[152410]: 2025-12-09 10:35:25.710699969 +0000 UTC m=+0.174730018 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 09 10:35:25 compute-0 python3.9[152452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:25 compute-0 sudo[152448]: pam_unix(sudo:session): session closed for user root
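Interleaved with the Ansible run, podman keeps logging health_status=healthy for the ovn_metadata_agent and ovn_controller containers; per the logged config_data, the check is the /openstack/healthcheck script mounted into each container. The same check can be re-run manually, assuming the container names shown in those entries:

    podman healthcheck run ovn_controller
    podman healthcheck run ovn_metadata_agent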
Dec 09 10:35:26 compute-0 sudo[152585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkepshiokzyhinehetvnbxeizgmtygfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276525.2521908-775-57288425042167/AnsiballZ_copy.py'
Dec 09 10:35:26 compute-0 sudo[152585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:26 compute-0 python3.9[152587]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276525.2521908-775-57288425042167/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:26 compute-0 sudo[152585]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:26 compute-0 sudo[152737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqbwgyghclneseardfsqjwcnoifkrchs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276526.675867-775-95280258718068/AnsiballZ_stat.py'
Dec 09 10:35:26 compute-0 sudo[152737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:27 compute-0 python3.9[152739]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:27 compute-0 sudo[152737]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:27 compute-0 sudo[152860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixlvkntcgxcqjktpchrwaeffcirjcmuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276526.675867-775-95280258718068/AnsiballZ_copy.py'
Dec 09 10:35:27 compute-0 sudo[152860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:27 compute-0 python3.9[152862]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276526.675867-775-95280258718068/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:27 compute-0 sudo[152860]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:28 compute-0 sudo[153014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plriofrrvsuunhpwcwogjxetmajrijcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276528.004999-775-274649665644882/AnsiballZ_stat.py'
Dec 09 10:35:28 compute-0 sudo[153014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:28 compute-0 python3.9[153016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:28 compute-0 sudo[153014]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:28 compute-0 sshd-session[152938]: Invalid user backup from 159.223.8.217 port 55438
Dec 09 10:35:28 compute-0 sshd-session[152938]: Connection closed by invalid user backup 159.223.8.217 port 55438 [preauth]
Dec 09 10:35:28 compute-0 sudo[153137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edwsdhlqlwbwkibmguwletberjcwmkic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276528.004999-775-274649665644882/AnsiballZ_copy.py'
Dec 09 10:35:28 compute-0 sudo[153137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:29 compute-0 python3.9[153139]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276528.004999-775-274649665644882/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:29 compute-0 sudo[153137]: pam_unix(sudo:session): session closed for user root
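At this point every socket drop-in has its override.conf in place (all copies report the same libvirt-socket.unit.j2 template and checksum 0bad41f409b4...); the overrides only take effect after the daemon reloads that follow below. The applied drop-ins can be inspected with standard systemd tooling, for example:

    systemctl cat virtqemud.socket
    systemd-delta --type=extended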
Dec 09 10:35:29 compute-0 python3.9[153289]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
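The shell task above greps the SELinux labels under /run/libvirt for container_*_t types, presumably checking whether runtime sockets are still carrying container labels from a previously containerized libvirt. If stale labels ever needed resetting to the policy defaults, restorecon would do it:

    restorecon -Rv /run/libvirt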
Dec 09 10:35:30 compute-0 sudo[153442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rphsmzfryxngwipmvqhrrfijqpcoubch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276529.8789148-981-234810293674092/AnsiballZ_seboolean.py'
Dec 09 10:35:30 compute-0 sudo[153442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:30 compute-0 python3.9[153444]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 09 10:35:31 compute-0 sudo[153442]: pam_unix(sudo:session): session closed for user root
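The ansible.posix.seboolean task with persistent=True and state=True is roughly the module-level form of setting the boolean with setsebool -P; the resulting state can be confirmed with getsebool:

    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm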
Dec 09 10:35:32 compute-0 sudo[153598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igqmjgijyfopnqawrvkjxryzvnqbkchv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276531.9155617-989-219920886399740/AnsiballZ_copy.py'
Dec 09 10:35:32 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 09 10:35:32 compute-0 sudo[153598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:32 compute-0 python3.9[153600]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:32 compute-0 sudo[153598]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:32 compute-0 sudo[153750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmhaqmnindehzqtrltzjhfwjabdjbpyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276532.4766705-989-199440797978838/AnsiballZ_copy.py'
Dec 09 10:35:32 compute-0 sudo[153750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:32 compute-0 python3.9[153752]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:32 compute-0 sudo[153750]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:33 compute-0 sudo[153902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwofvbfztaxbuqejwcwdfqowxajwtveo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276533.0090885-989-108005085525211/AnsiballZ_copy.py'
Dec 09 10:35:33 compute-0 sudo[153902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:33 compute-0 python3.9[153904]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:33 compute-0 sudo[153902]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:33 compute-0 sudo[154054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iicvljbpsglckwoinckroqausdlfrvog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276533.5917914-989-191824045328291/AnsiballZ_copy.py'
Dec 09 10:35:33 compute-0 sudo[154054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:34 compute-0 python3.9[154056]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:34 compute-0 sudo[154054]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:34 compute-0 sudo[154206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hltmgvunfbgrjniykrqqsdfbybapgzlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276534.2454808-989-255958310543315/AnsiballZ_copy.py'
Dec 09 10:35:34 compute-0 sudo[154206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:34 compute-0 python3.9[154208]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:34 compute-0 sudo[154206]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:35 compute-0 sudo[154358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugixrmdxkntuslynjarhznkjtxxpbmeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276534.8221362-1025-212163098212118/AnsiballZ_copy.py'
Dec 09 10:35:35 compute-0 sudo[154358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:35 compute-0 python3.9[154360]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:35 compute-0 sudo[154358]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:35 compute-0 sudo[154510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kldgdgknhugntsjgggodhixmadmgblqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276535.634121-1025-239515082047795/AnsiballZ_copy.py'
Dec 09 10:35:35 compute-0 sudo[154510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:36 compute-0 python3.9[154512]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:36 compute-0 sudo[154510]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:36 compute-0 sudo[154662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nztzyvdeydetpnxusqpiqslckiskurej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276536.3419685-1025-72719106869303/AnsiballZ_copy.py'
Dec 09 10:35:36 compute-0 sudo[154662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:36 compute-0 python3.9[154664]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:36 compute-0 sudo[154662]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:37 compute-0 sudo[154814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pydsmvnpqbvhafurmbdxwkshgdjwhybh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276537.0793154-1025-141410874198234/AnsiballZ_copy.py'
Dec 09 10:35:37 compute-0 sudo[154814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:37 compute-0 python3.9[154816]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:37 compute-0 sudo[154814]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:38 compute-0 sudo[154966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slonbrdmnhkotricsyfwhatygsxptugk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276537.7333918-1025-150959858162552/AnsiballZ_copy.py'
Dec 09 10:35:38 compute-0 sudo[154966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:38 compute-0 python3.9[154968]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:38 compute-0 sudo[154966]: pam_unix(sudo:session): session closed for user root
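The copy tasks above place the TLS material in the default locations libvirt and QEMU expect: /etc/pki/CA/cacert.pem, /etc/pki/libvirt/servercert.pem and clientcert.pem, /etc/pki/libvirt/private/serverkey.pem and clientkey.pem, plus the /etc/pki/qemu/*-cert.pem and *-key.pem files, all sourced from /var/lib/openstack/certs/libvirt/default. libvirt ships a helper script that sanity-checks these default paths, which could be used to verify the result:

    virt-pki-validate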
Dec 09 10:35:38 compute-0 sudo[155118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucuyxcrfabtnntxvfstmstkwsehbmopw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276538.415822-1061-219538753048017/AnsiballZ_systemd.py'
Dec 09 10:35:38 compute-0 sudo[155118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:39 compute-0 python3.9[155120]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:35:39 compute-0 systemd[1]: Reloading.
Dec 09 10:35:39 compute-0 systemd-rc-local-generator[155142]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:35:39 compute-0 systemd-sysv-generator[155152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:35:39 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 09 10:35:39 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 09 10:35:39 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 09 10:35:39 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 09 10:35:39 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 09 10:35:39 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 09 10:35:39 compute-0 sudo[155118]: pam_unix(sudo:session): session closed for user root
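Each ansible.builtin.systemd task here uses daemon_reload=True with state=restarted, which is roughly a reload followed by a restart; for virtlogd that corresponds to:

    systemctl daemon-reload
    systemctl restart virtlogd.service
    systemctl status virtlogd.socket virtlogd-admin.socket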
Dec 09 10:35:39 compute-0 sudo[155312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdapmtvbnajhgfcasipgaxgpbvrhserk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276539.5768645-1061-252307656724749/AnsiballZ_systemd.py'
Dec 09 10:35:39 compute-0 sudo[155312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:40 compute-0 python3.9[155314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:35:40 compute-0 systemd[1]: Reloading.
Dec 09 10:35:40 compute-0 systemd-rc-local-generator[155339]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:35:40 compute-0 systemd-sysv-generator[155345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:35:40 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 09 10:35:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 09 10:35:40 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 09 10:35:40 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 09 10:35:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 09 10:35:40 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 09 10:35:40 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 09 10:35:40 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 09 10:35:40 compute-0 sudo[155312]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:41 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 09 10:35:41 compute-0 sudo[155528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abgfswanrspfliansvnsazyzytkhdcsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276540.7856429-1061-70831365538946/AnsiballZ_systemd.py'
Dec 09 10:35:41 compute-0 sudo[155528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:41 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 09 10:35:41 compute-0 python3.9[155530]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:35:41 compute-0 systemd[1]: Reloading.
Dec 09 10:35:41 compute-0 systemd-rc-local-generator[155552]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:35:41 compute-0 systemd-sysv-generator[155559]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:35:41 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 09 10:35:41 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 09 10:35:41 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 09 10:35:41 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 09 10:35:41 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 09 10:35:41 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 09 10:35:41 compute-0 sudo[155528]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:41 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 09 10:35:41 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 09 10:35:42 compute-0 sudo[155748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqowgzpnvsjprvjwsutiwnkmawerzlvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276541.9806173-1061-62522618953986/AnsiballZ_systemd.py'
Dec 09 10:35:42 compute-0 sudo[155748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:42 compute-0 python3.9[155750]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:35:42 compute-0 systemd[1]: Reloading.
Dec 09 10:35:42 compute-0 systemd-rc-local-generator[155770]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:35:42 compute-0 systemd-sysv-generator[155778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:35:42 compute-0 setroubleshoot[155501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3f89051a-4810-4af9-9a87-e4ecee2c22f0
Dec 09 10:35:42 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 09 10:35:42 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 09 10:35:42 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 09 10:35:42 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 09 10:35:42 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 09 10:35:42 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 09 10:35:43 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 09 10:35:43 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 09 10:35:43 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 09 10:35:43 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 09 10:35:43 compute-0 setroubleshoot[155501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 09 10:35:43 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 09 10:35:43 compute-0 setroubleshoot[155501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3f89051a-4810-4af9-9a87-e4ecee2c22f0
Dec 09 10:35:43 compute-0 setroubleshoot[155501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 09 10:35:43 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 09 10:35:43 compute-0 sudo[155748]: pam_unix(sudo:session): session closed for user root
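The SETroubleshoot alert above spells out a two-stage workflow: first gather path information for the denied access, then (only if the access looks legitimate) generate a local policy module. Consolidated into one sketch, using only the commands quoted in the alert itself (whether virtlogd should really hold dac_read_search is a local policy decision, and the alert rates that path at just 9.59% confidence):

    # turn on full auditing so AVC records carry PATH information
    auditctl -w /etc/shadow -p w
    # recreate the denial (e.g. restart virtlogd), then inspect recent AVCs
    ausearch -m avc -ts recent
    # if the reported file's ownership/permissions are correct, build and load a local allow module
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp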
Dec 09 10:35:43 compute-0 sudo[155963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mozvynartvuyvdssqpzbukvzzduhrxjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276543.3233109-1061-78368715935367/AnsiballZ_systemd.py'
Dec 09 10:35:43 compute-0 sudo[155963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:43 compute-0 python3.9[155965]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:35:43 compute-0 systemd[1]: Reloading.
Dec 09 10:35:44 compute-0 systemd-rc-local-generator[155995]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:35:44 compute-0 systemd-sysv-generator[155999]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:35:44 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 09 10:35:44 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 09 10:35:44 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 09 10:35:44 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 09 10:35:44 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 09 10:35:44 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 09 10:35:44 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 09 10:35:44 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 09 10:35:44 compute-0 sudo[155963]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:44 compute-0 sudo[156176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnlhtforhmnofrrnteflliifimgwppmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276544.6013703-1098-98263620620077/AnsiballZ_file.py'
Dec 09 10:35:44 compute-0 sudo[156176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:45 compute-0 python3.9[156178]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:45 compute-0 sudo[156176]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:45 compute-0 sudo[156328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugkldvwosbbvurqtrdaszhvwjvhstwrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276545.2374694-1106-140148338305845/AnsiballZ_find.py'
Dec 09 10:35:45 compute-0 sudo[156328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:45 compute-0 python3.9[156330]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 10:35:45 compute-0 sudo[156328]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:46 compute-0 sudo[156480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeefhizsifexvtbnzfntlslfbojftdyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276546.1248589-1120-128413579699312/AnsiballZ_stat.py'
Dec 09 10:35:46 compute-0 sudo[156480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:46 compute-0 python3.9[156482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:46 compute-0 sudo[156480]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:46 compute-0 sudo[156603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emqhycjxtqsrhwiikgkghyqdvffzbeti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276546.1248589-1120-128413579699312/AnsiballZ_copy.py'
Dec 09 10:35:46 compute-0 sudo[156603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:47 compute-0 python3.9[156605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276546.1248589-1120-128413579699312/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:47 compute-0 sudo[156603]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:47 compute-0 sudo[156755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipvoqxtnaewwwwucbnozvoneplygbbjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276547.588267-1136-39257512881102/AnsiballZ_file.py'
Dec 09 10:35:47 compute-0 sudo[156755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:48 compute-0 python3.9[156757]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:48 compute-0 sudo[156755]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:48 compute-0 sudo[156907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slpfptimiyjspijbxmjdftnjyspydbbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276548.310738-1144-8093451043198/AnsiballZ_stat.py'
Dec 09 10:35:48 compute-0 sudo[156907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:48 compute-0 python3.9[156909]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:48 compute-0 sudo[156907]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:49 compute-0 sudo[156985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aihhgnhzokqqkznwrikjhnyiignjuejd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276548.310738-1144-8093451043198/AnsiballZ_file.py'
Dec 09 10:35:49 compute-0 sudo[156985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:49 compute-0 python3.9[156987]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:49 compute-0 sudo[156985]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:49 compute-0 sudo[157137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgmyatheygstjvbtzvhqldpmsicsaqlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276549.5291398-1156-165441272238266/AnsiballZ_stat.py'
Dec 09 10:35:49 compute-0 sudo[157137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:50 compute-0 python3.9[157139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:50 compute-0 sudo[157137]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:50 compute-0 sudo[157227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtjoktmulyplzccqjfnitoodjzlkopvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276549.5291398-1156-165441272238266/AnsiballZ_file.py'
Dec 09 10:35:50 compute-0 sudo[157227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:50 compute-0 podman[157189]: 2025-12-09 10:35:50.386650963 +0000 UTC m=+0.070643352 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:35:50 compute-0 python3.9[157234]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.pv5bv7yw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:50 compute-0 sudo[157227]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:51 compute-0 sudo[157385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpzeltwusddkbvdysftxrvvidacodnxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276550.7630136-1168-277589218562252/AnsiballZ_stat.py'
Dec 09 10:35:51 compute-0 sudo[157385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:51 compute-0 python3.9[157387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:51 compute-0 sudo[157385]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:51 compute-0 sudo[157463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apaxuyrzvstmqrtczmeiwfpeizumoyqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276550.7630136-1168-277589218562252/AnsiballZ_file.py'
Dec 09 10:35:51 compute-0 sudo[157463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:51 compute-0 python3.9[157465]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:51 compute-0 sudo[157463]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:52 compute-0 sudo[157615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybamswkzdvowymfuovkuayiknzryjsbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276551.9420168-1181-211212699940709/AnsiballZ_command.py'
Dec 09 10:35:52 compute-0 sudo[157615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:52 compute-0 python3.9[157617]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:35:52 compute-0 sudo[157615]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:53 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 09 10:35:53 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.046s CPU time.
Dec 09 10:35:53 compute-0 sudo[157768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyghcfkfulrmboelxeskuobncnkiyzej ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276552.6774251-1189-117202966630963/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 10:35:53 compute-0 sudo[157768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:53 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 09 10:35:53 compute-0 python3[157770]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 10:35:53 compute-0 sudo[157768]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:53 compute-0 sudo[157920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoyovjrywebstybrzgasjvchtqcnbety ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276553.581732-1197-177949081151128/AnsiballZ_stat.py'
Dec 09 10:35:53 compute-0 sudo[157920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:54 compute-0 python3.9[157922]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:54 compute-0 sudo[157920]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:54 compute-0 sudo[157998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydxwikkfapwqzmltvbktaswaxxgfjjuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276553.581732-1197-177949081151128/AnsiballZ_file.py'
Dec 09 10:35:54 compute-0 sudo[157998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:54 compute-0 python3.9[158000]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:54 compute-0 sudo[157998]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:55 compute-0 sudo[158150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkwnyzgfauwpebsndtzitfjxtploaxkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276554.8144808-1209-83533830199736/AnsiballZ_stat.py'
Dec 09 10:35:55 compute-0 sudo[158150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:55 compute-0 python3.9[158152]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:55 compute-0 sudo[158150]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:55 compute-0 sudo[158228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amvtusuccdcclpfrnmqgdccdtnwclxbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276554.8144808-1209-83533830199736/AnsiballZ_file.py'
Dec 09 10:35:55 compute-0 sudo[158228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:55 compute-0 python3.9[158230]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:55 compute-0 sudo[158228]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:56 compute-0 podman[158254]: 2025-12-09 10:35:56.026287095 +0000 UTC m=+0.178376607 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:35:56 compute-0 sudo[158406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwdgpxwbwwttbhlgejnpxggprykskkgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276556.0508368-1221-49157756238642/AnsiballZ_stat.py'
Dec 09 10:35:56 compute-0 sudo[158406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:56 compute-0 python3.9[158408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:56 compute-0 sudo[158406]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:56 compute-0 sudo[158484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjnaqagygfynbqkrvffebmgzcnfjlorm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276556.0508368-1221-49157756238642/AnsiballZ_file.py'
Dec 09 10:35:56 compute-0 sudo[158484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:57 compute-0 python3.9[158486]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:57 compute-0 sudo[158484]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:57 compute-0 sshd-session[158487]: Invalid user backup from 159.223.8.217 port 44178
Dec 09 10:35:57 compute-0 sshd-session[158487]: Connection closed by invalid user backup 159.223.8.217 port 44178 [preauth]
Dec 09 10:35:57 compute-0 sudo[158638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mseelelljgkgbkybvoxufleuwxpkeqvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276557.2786825-1233-32463914505397/AnsiballZ_stat.py'
Dec 09 10:35:57 compute-0 sudo[158638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:57 compute-0 python3.9[158640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:57 compute-0 sudo[158638]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:58 compute-0 sudo[158716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdhnliwzkqximnqlkfsbfagrbecgzjbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276557.2786825-1233-32463914505397/AnsiballZ_file.py'
Dec 09 10:35:58 compute-0 sudo[158716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:58 compute-0 python3.9[158718]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:58 compute-0 sudo[158716]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:59 compute-0 sudo[158868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbkvpsyybsizwetivtebsdctmbgoyfag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276558.5821588-1245-55842328321180/AnsiballZ_stat.py'
Dec 09 10:35:59 compute-0 sudo[158868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:59 compute-0 python3.9[158870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:35:59 compute-0 sudo[158868]: pam_unix(sudo:session): session closed for user root
Dec 09 10:35:59 compute-0 sudo[158993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gybahumlniudsgdybkyzfumhkpponkld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276558.5821588-1245-55842328321180/AnsiballZ_copy.py'
Dec 09 10:35:59 compute-0 sudo[158993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:35:59 compute-0 python3.9[158995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276558.5821588-1245-55842328321180/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:35:59 compute-0 sudo[158993]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:00 compute-0 sudo[159145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywmyvvorsbonlcseiafudrdjakqcuctv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276559.9853287-1260-180239917812529/AnsiballZ_file.py'
Dec 09 10:36:00 compute-0 sudo[159145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:00 compute-0 python3.9[159147]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:00 compute-0 sudo[159145]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:01 compute-0 sudo[159297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuenopbozrpqbeclrtveaypyhvgvmfmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276560.6863306-1268-77647028591051/AnsiballZ_command.py'
Dec 09 10:36:01 compute-0 sudo[159297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:01 compute-0 python3.9[159299]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:36:01 compute-0 sudo[159297]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:01 compute-0 sudo[159452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgiknvsuemxmlmxeyxxsdnvzwxtqhmdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276561.434631-1276-59496121445140/AnsiballZ_blockinfile.py'
Dec 09 10:36:01 compute-0 sudo[159452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:02 compute-0 python3.9[159454]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:02 compute-0 sudo[159452]: pam_unix(sudo:session): session closed for user root
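Given the marker and block arguments in the blockinfile call above, /etc/sysconfig/nftables.conf should end up containing a managed section along these lines (reconstructed from the invocation; the rest of the file is not shown in this log):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Because the task passes validate=nft -c -f %s, the edited file is syntax-checked by nft before it replaces the original.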
Dec 09 10:36:02 compute-0 sudo[159604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgjokhxhvvdvqnvugjehxcwjmbsfnsse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276562.3455045-1285-252796135774863/AnsiballZ_command.py'
Dec 09 10:36:02 compute-0 sudo[159604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:02 compute-0 python3.9[159606]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:36:02 compute-0 sudo[159604]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:03 compute-0 sudo[159757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuvbpbygkhwvkirgijgftyextiuicgbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276563.1018643-1293-232637947856269/AnsiballZ_stat.py'
Dec 09 10:36:03 compute-0 sudo[159757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:03 compute-0 python3.9[159759]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:36:03 compute-0 sudo[159757]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:04 compute-0 sudo[159911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkxajaxnxbobhbiykghdbfzahqoiaytx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276563.8273852-1301-208893136223609/AnsiballZ_command.py'
Dec 09 10:36:04 compute-0 sudo[159911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:04 compute-0 python3.9[159913]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:36:04 compute-0 sudo[159911]: pam_unix(sudo:session): session closed for user root
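The three nft invocations recorded above form the check/load/apply sequence for the EDPM ruleset; run by hand it would look roughly like this (paths exactly as in the logged commands):

    # 1. syntax-check the concatenated ruleset without loading it
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. create the EDPM chains
    nft -f /etc/nftables/edpm-chains.nft
    # 3. flush the chains, reload the rules, and refresh the jump rules
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -

The edpm-rules.nft.changed file touched earlier and removed just below appears to act as a change sentinel, so step 3 only runs when the rules file was actually rewritten.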
Dec 09 10:36:04 compute-0 sudo[160066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqjghecgkttkkkragzwvfmigzzqmuzhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276564.5783665-1309-141633669429010/AnsiballZ_file.py'
Dec 09 10:36:04 compute-0 sudo[160066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:05 compute-0 python3.9[160068]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:05 compute-0 sudo[160066]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:05 compute-0 sudo[160218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oenfjdhvoknghstvihrdynkzbplduxmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276565.25584-1317-85491977375534/AnsiballZ_stat.py'
Dec 09 10:36:05 compute-0 sudo[160218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:05 compute-0 python3.9[160220]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:36:05 compute-0 sudo[160218]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:06 compute-0 sudo[160341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddmjgfgszrraliiqqnvxrvoazsurmmwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276565.25584-1317-85491977375534/AnsiballZ_copy.py'
Dec 09 10:36:06 compute-0 sudo[160341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:06 compute-0 python3.9[160343]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276565.25584-1317-85491977375534/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:06 compute-0 sudo[160341]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:06 compute-0 sudo[160493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkvuronivrsvtrcpilhrksqotvdcoufb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276566.5768652-1332-171036422432515/AnsiballZ_stat.py'
Dec 09 10:36:06 compute-0 sudo[160493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:07 compute-0 python3.9[160495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:36:07 compute-0 sudo[160493]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:07 compute-0 sudo[160616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxtadnypphypyymjesgahdhwatuhnwpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276566.5768652-1332-171036422432515/AnsiballZ_copy.py'
Dec 09 10:36:07 compute-0 sudo[160616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:07 compute-0 python3.9[160618]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276566.5768652-1332-171036422432515/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:07 compute-0 sudo[160616]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:08 compute-0 sudo[160768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcgjkgwlsnksdeuqbprudawqzgoibaym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276567.8723955-1347-77985509808563/AnsiballZ_stat.py'
Dec 09 10:36:08 compute-0 sudo[160768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:08 compute-0 python3.9[160770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:36:08 compute-0 sudo[160768]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:08 compute-0 sudo[160891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsmjczjulfdlemgucktgwrornghywpxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276567.8723955-1347-77985509808563/AnsiballZ_copy.py'
Dec 09 10:36:08 compute-0 sudo[160891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:08 compute-0 python3.9[160893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276567.8723955-1347-77985509808563/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:08 compute-0 sudo[160891]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:09 compute-0 sudo[161043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jidispdnwcbskypbicfclwzobckinxsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276569.0962226-1362-265676283720087/AnsiballZ_systemd.py'
Dec 09 10:36:09 compute-0 sudo[161043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:09 compute-0 python3.9[161045]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:36:09 compute-0 systemd[1]: Reloading.
Dec 09 10:36:09 compute-0 systemd-rc-local-generator[161068]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:36:09 compute-0 systemd-sysv-generator[161071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:36:09 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 09 10:36:10 compute-0 sudo[161043]: pam_unix(sudo:session): session closed for user root
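The systemd module call above (daemon_reload=True, enabled=True, state=restarted on edpm_libvirt.target) corresponds roughly to the following manual sequence, with the "Reached target" message above confirming the restart:

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target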
Dec 09 10:36:10 compute-0 sudo[161234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfneckezixycaoqtmmbddiivdflczhdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276570.2012372-1370-97717118587587/AnsiballZ_systemd.py'
Dec 09 10:36:10 compute-0 sudo[161234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:10 compute-0 python3.9[161236]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 09 10:36:10 compute-0 systemd[1]: Reloading.
Dec 09 10:36:10 compute-0 systemd-sysv-generator[161267]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:36:10 compute-0 systemd-rc-local-generator[161264]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:36:11 compute-0 systemd[1]: Reloading.
Dec 09 10:36:11 compute-0 systemd-sysv-generator[161305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:36:11 compute-0 systemd-rc-local-generator[161302]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:36:11 compute-0 sudo[161234]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:11 compute-0 sshd-session[106792]: Connection closed by 192.168.122.30 port 52204
Dec 09 10:36:11 compute-0 sshd-session[106789]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:36:11 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Dec 09 10:36:11 compute-0 systemd[1]: session-23.scope: Consumed 3min 36.926s CPU time.
Dec 09 10:36:11 compute-0 systemd-logind[806]: Session 23 logged out. Waiting for processes to exit.
Dec 09 10:36:11 compute-0 systemd-logind[806]: Removed session 23.
Dec 09 10:36:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:36:16.963 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:36:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:36:16.966 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:36:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:36:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:36:17 compute-0 sshd-session[161334]: Accepted publickey for zuul from 192.168.122.30 port 44538 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:36:17 compute-0 systemd-logind[806]: New session 24 of user zuul.
Dec 09 10:36:17 compute-0 systemd[1]: Started Session 24 of User zuul.
Dec 09 10:36:17 compute-0 sshd-session[161334]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:36:18 compute-0 python3.9[161487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:36:19 compute-0 python3.9[161641]: ansible-ansible.builtin.service_facts Invoked
Dec 09 10:36:19 compute-0 network[161658]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 10:36:19 compute-0 network[161659]: 'network-scripts' will be removed from distribution in near future.
Dec 09 10:36:19 compute-0 network[161660]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 10:36:20 compute-0 podman[161667]: 2025-12-09 10:36:20.54953951 +0000 UTC m=+0.068854074 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 10:36:23 compute-0 sudo[161947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyxclkyglksorxwawbzefntsrhfwemot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276583.432443-47-179709563407913/AnsiballZ_setup.py'
Dec 09 10:36:23 compute-0 sudo[161947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:24 compute-0 python3.9[161949]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:36:24 compute-0 sudo[161947]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:25 compute-0 sudo[162031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcdszcsovatobsgihfxucnoukuwwqwip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276583.432443-47-179709563407913/AnsiballZ_dnf.py'
Dec 09 10:36:25 compute-0 sudo[162031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:25 compute-0 python3.9[162033]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:36:26 compute-0 podman[162037]: 2025-12-09 10:36:26.953027537 +0000 UTC m=+0.106251651 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 10:36:27 compute-0 sshd-session[162064]: Invalid user backup from 159.223.8.217 port 42774
Dec 09 10:36:27 compute-0 sshd-session[162064]: Connection closed by invalid user backup 159.223.8.217 port 42774 [preauth]
Dec 09 10:36:27 compute-0 sshd-session[162035]: Invalid user ubuntu from 117.50.226.213 port 45356
Dec 09 10:36:28 compute-0 sshd-session[162035]: Received disconnect from 117.50.226.213 port 45356:11:  [preauth]
Dec 09 10:36:28 compute-0 sshd-session[162035]: Disconnected from invalid user ubuntu 117.50.226.213 port 45356 [preauth]
Dec 09 10:36:30 compute-0 sudo[162031]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:31 compute-0 sudo[162215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slodjancnvirtpymehklfrzjaqgmiadw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276590.9424613-59-123075229887904/AnsiballZ_stat.py'
Dec 09 10:36:31 compute-0 sudo[162215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:31 compute-0 python3.9[162217]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:36:31 compute-0 sudo[162215]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:32 compute-0 sudo[162367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjdxoexeyypiixpkpcjgmbdgkklprdrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276591.8729959-69-124068746505163/AnsiballZ_command.py'
Dec 09 10:36:32 compute-0 sudo[162367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:32 compute-0 python3.9[162369]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:36:32 compute-0 sudo[162367]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:33 compute-0 sudo[162520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpvcvtulvejerucywdbgdydrqiktfxju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276592.8793445-79-131204523282666/AnsiballZ_stat.py'
Dec 09 10:36:33 compute-0 sudo[162520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:33 compute-0 python3.9[162522]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:36:33 compute-0 sudo[162520]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:33 compute-0 sudo[162672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owgwqywpaltkspzpihizicvmxvvyvcfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276593.610711-87-175512090364001/AnsiballZ_command.py'
Dec 09 10:36:33 compute-0 sudo[162672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:34 compute-0 python3.9[162674]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:36:34 compute-0 sudo[162672]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:34 compute-0 sudo[162825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svpvmtgmkkgwixnidogoakbvwccoblvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276594.3733435-95-232495538045493/AnsiballZ_stat.py'
Dec 09 10:36:34 compute-0 sudo[162825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:34 compute-0 python3.9[162827]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:36:34 compute-0 sudo[162825]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:35 compute-0 sudo[162948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzdkcqyfecgsjnnaundyloaqpkqlxvvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276594.3733435-95-232495538045493/AnsiballZ_copy.py'
Dec 09 10:36:35 compute-0 sudo[162948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:35 compute-0 python3.9[162950]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276594.3733435-95-232495538045493/.source.iscsi _original_basename=.b6im0d01 follow=False checksum=d96ef98e9faa79049d0821e0c39e20fc3cf21a0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:35 compute-0 sudo[162948]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:36 compute-0 sudo[163100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siusfzxdrgzsnjfyimhpvojltubibpnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276595.9657214-110-182447926730240/AnsiballZ_file.py'
Dec 09 10:36:36 compute-0 sudo[163100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:36 compute-0 python3.9[163102]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:36 compute-0 sudo[163100]: pam_unix(sudo:session): session closed for user root
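[editor's note] The preceding tasks run /usr/sbin/iscsi-iname to generate a fresh IQN, copy /etc/iscsi/initiatorname.iscsi with mode 0644 (the copied content itself is not logged, content=NOT_LOGGING_PARAMETER), and touch /etc/iscsi/.initiator_reset. A rough Python equivalent, assuming the standard "InitiatorName=iqn..." file format:

```python
#!/usr/bin/env python3
"""Regenerate the iSCSI initiator name, roughly mirroring the Ansible steps above (sketch)."""
import pathlib
import subprocess

INITIATOR_FILE = pathlib.Path("/etc/iscsi/initiatorname.iscsi")

def main() -> None:
    # /usr/sbin/iscsi-iname prints a new iqn.* string, as invoked in the log above.
    iqn = subprocess.run(
        ["/usr/sbin/iscsi-iname"], capture_output=True, text=True, check=True
    ).stdout.strip()
    # Standard initiatorname.iscsi format; the exact template used by the role is not logged.
    INITIATOR_FILE.write_text(f"InitiatorName={iqn}\n")
    INITIATOR_FILE.chmod(0o644)
    # Marker file so the initiator reset is only performed once, as in the file task above.
    pathlib.Path("/etc/iscsi/.initiator_reset").touch(mode=0o600, exist_ok=True)

if __name__ == "__main__":
    main()
```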
Dec 09 10:36:37 compute-0 sudo[163252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcombpctosmnyuadjzowilgipdfdmjss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276596.8624861-118-161271907415992/AnsiballZ_lineinfile.py'
Dec 09 10:36:37 compute-0 sudo[163252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:37 compute-0 python3.9[163254]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:37 compute-0 sudo[163252]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:37 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 10:36:37 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 10:36:38 compute-0 sudo[163405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvpzcknygvroapbftqmmctulpwxzybhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276597.8710854-127-57433494954611/AnsiballZ_systemd_service.py'
Dec 09 10:36:38 compute-0 sudo[163405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:38 compute-0 python3.9[163407]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:36:38 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 09 10:36:38 compute-0 sudo[163405]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:39 compute-0 sudo[163561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbtrmsotofkwiryhembmohuaqoumordl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276599.1119003-135-58483596514040/AnsiballZ_systemd_service.py'
Dec 09 10:36:39 compute-0 sudo[163561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:39 compute-0 python3.9[163563]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:36:39 compute-0 systemd[1]: Reloading.
Dec 09 10:36:39 compute-0 systemd-rc-local-generator[163592]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:36:39 compute-0 systemd-sysv-generator[163596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:36:40 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 09 10:36:40 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 09 10:36:40 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 09 10:36:40 compute-0 systemd[1]: Started Open-iSCSI.
Dec 09 10:36:40 compute-0 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Dec 09 10:36:40 compute-0 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Dec 09 10:36:40 compute-0 sudo[163561]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:41 compute-0 sudo[163760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdabxdccvhxdcfagkehsiiwjfllaxxrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276600.7004375-146-164811741629392/AnsiballZ_service_facts.py'
Dec 09 10:36:41 compute-0 sudo[163760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:41 compute-0 python3.9[163762]: ansible-ansible.builtin.service_facts Invoked
Dec 09 10:36:41 compute-0 network[163779]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 10:36:41 compute-0 network[163780]: 'network-scripts' will be removed from distribution in near future.
Dec 09 10:36:41 compute-0 network[163781]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 10:36:46 compute-0 sudo[163760]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:46 compute-0 sudo[164052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoqpdsjqcwzvvmvfhezlmriquodytgjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276606.4092276-156-92399292816240/AnsiballZ_file.py'
Dec 09 10:36:46 compute-0 sudo[164052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:46 compute-0 python3.9[164054]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 09 10:36:46 compute-0 sudo[164052]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:47 compute-0 sudo[164204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtkheolignpclzeizrzhyinyeoatouut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276607.1657226-164-46808359513231/AnsiballZ_modprobe.py'
Dec 09 10:36:47 compute-0 sudo[164204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:47 compute-0 python3.9[164206]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 09 10:36:47 compute-0 sudo[164204]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:48 compute-0 sudo[164360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odysfudmysgccfubqhfhfxzszpugtfmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276608.3093913-172-161440376717828/AnsiballZ_stat.py'
Dec 09 10:36:48 compute-0 sudo[164360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:48 compute-0 python3.9[164362]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:36:48 compute-0 sudo[164360]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:49 compute-0 sudo[164483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfulegoswyqmypisvdogawsffvsxlwlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276608.3093913-172-161440376717828/AnsiballZ_copy.py'
Dec 09 10:36:49 compute-0 sudo[164483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:49 compute-0 python3.9[164485]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276608.3093913-172-161440376717828/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:49 compute-0 sudo[164483]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:49 compute-0 sudo[164635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcfrdhkpyarrtashklfjwecnhjbmelcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276609.6596737-188-136935627114920/AnsiballZ_lineinfile.py'
Dec 09 10:36:49 compute-0 sudo[164635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:50 compute-0 python3.9[164637]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:50 compute-0 sudo[164635]: pam_unix(sudo:session): session closed for user root
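[editor's note] The tasks above load dm-multipath with community.general.modprobe, render /etc/modules-load.d/dm-multipath.conf from module-load.conf.j2, and add a dm-multipath line to /etc/modules. A sketch of the same steps, assuming the modules-load file simply names the module (the rendered template content is not shown in the log):

```python
#!/usr/bin/env python3
"""Load dm-multipath now and make it persistent, mirroring the logged tasks (sketch)."""
import pathlib
import subprocess

MODULE = "dm-multipath"

def main() -> None:
    # Equivalent of the community.general.modprobe task with state=present.
    subprocess.run(["modprobe", MODULE], check=True)

    # systemd-modules-load reads /etc/modules-load.d/*.conf at boot; a single module
    # name per line is assumed here, since the real template content is not logged.
    conf = pathlib.Path("/etc/modules-load.d/dm-multipath.conf")
    conf.parent.mkdir(mode=0o755, exist_ok=True)
    conf.write_text(f"{MODULE}\n")
    conf.chmod(0o644)

    # The role also keeps a legacy /etc/modules entry (lineinfile with create=True).
    modules = pathlib.Path("/etc/modules")
    lines = modules.read_text().splitlines() if modules.exists() else []
    if MODULE not in [line.strip() for line in lines]:
        lines.append(MODULE)
        modules.write_text("\n".join(lines) + "\n")

if __name__ == "__main__":
    main()
```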
Dec 09 10:36:50 compute-0 podman[164714]: 2025-12-09 10:36:50.98614361 +0000 UTC m=+0.112131840 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 10:36:51 compute-0 sudo[164804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmxtzfjzdatbkrgihiqjewxzsxtpexmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276610.4105241-196-220871819380842/AnsiballZ_systemd.py'
Dec 09 10:36:51 compute-0 sudo[164804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:51 compute-0 python3.9[164806]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:36:51 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 09 10:36:51 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 09 10:36:51 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 09 10:36:51 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 09 10:36:51 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 09 10:36:51 compute-0 sudo[164804]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:52 compute-0 sudo[164960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxldocbifqafdvecmpcexzguynuqbfsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276611.9724672-204-235100834158289/AnsiballZ_file.py'
Dec 09 10:36:52 compute-0 sudo[164960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:52 compute-0 python3.9[164962]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:36:52 compute-0 sudo[164960]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:53 compute-0 sudo[165112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pambjohezfltswnipzunfbsbjroftgjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276612.7477384-213-115368672736766/AnsiballZ_stat.py'
Dec 09 10:36:53 compute-0 sudo[165112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:53 compute-0 python3.9[165114]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:36:53 compute-0 sudo[165112]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:53 compute-0 sudo[165264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iexbdyxzeilndsheuyksjljwvbjxbjtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276613.5074162-222-132392820100489/AnsiballZ_stat.py'
Dec 09 10:36:53 compute-0 sudo[165264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:54 compute-0 python3.9[165266]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:36:54 compute-0 sudo[165264]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:54 compute-0 sudo[165416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aetdxspmpwqgdexcfnswlcxfpyqeratk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276614.2716906-230-241014184696775/AnsiballZ_stat.py'
Dec 09 10:36:54 compute-0 sudo[165416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:54 compute-0 python3.9[165418]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:36:54 compute-0 sudo[165416]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:55 compute-0 sudo[165539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsipxgwexbdpqdworiqpzghomboixchq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276614.2716906-230-241014184696775/AnsiballZ_copy.py'
Dec 09 10:36:55 compute-0 sudo[165539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:55 compute-0 python3.9[165541]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276614.2716906-230-241014184696775/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:55 compute-0 sudo[165539]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:55 compute-0 sudo[165691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrbqmjwtjcxroxohnopayxeudmosxjie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276615.687018-245-219388773194970/AnsiballZ_command.py'
Dec 09 10:36:55 compute-0 sudo[165691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:56 compute-0 python3.9[165693]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:36:56 compute-0 sudo[165691]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:56 compute-0 sudo[165844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxsaugsgjosbutlgtrimtbynkmhnpsfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276616.3941824-253-69855726408294/AnsiballZ_lineinfile.py'
Dec 09 10:36:56 compute-0 sudo[165844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:56 compute-0 python3.9[165846]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:56 compute-0 sudo[165844]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:57 compute-0 sshd-session[165923]: Invalid user backup from 159.223.8.217 port 55364
Dec 09 10:36:57 compute-0 sudo[166015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzwckeftwrhffkcldcwanoadfuqlixse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276617.0962176-261-251165174710265/AnsiballZ_replace.py'
Dec 09 10:36:57 compute-0 sudo[166015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:57 compute-0 sshd-session[165923]: Connection closed by invalid user backup 159.223.8.217 port 55364 [preauth]
Dec 09 10:36:57 compute-0 podman[165972]: 2025-12-09 10:36:57.833246875 +0000 UTC m=+0.123684844 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 10:36:57 compute-0 python3.9[166021]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:58 compute-0 sudo[166015]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:58 compute-0 sudo[166176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhhqafuyzosfjznvnbvmsibbhhusmpvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276618.2184482-269-103496332480804/AnsiballZ_replace.py'
Dec 09 10:36:58 compute-0 sudo[166176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:58 compute-0 python3.9[166178]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:58 compute-0 sudo[166176]: pam_unix(sudo:session): session closed for user root
Dec 09 10:36:59 compute-0 sudo[166328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrezgpmytbnbsjcifzcebqhxuiijabkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276619.119454-278-170164015672033/AnsiballZ_lineinfile.py'
Dec 09 10:36:59 compute-0 sudo[166328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:36:59 compute-0 python3.9[166330]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:36:59 compute-0 sudo[166328]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:00 compute-0 sudo[166480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wafwkehhpqppkfmcrzfqzpuuincpreqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276619.8554425-278-144429851180019/AnsiballZ_lineinfile.py'
Dec 09 10:37:00 compute-0 sudo[166480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:00 compute-0 python3.9[166482]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:00 compute-0 sudo[166480]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:00 compute-0 sudo[166632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zytmebpjznybdfkigechwdpwtyaiyxob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276620.5876482-278-112124505597430/AnsiballZ_lineinfile.py'
Dec 09 10:37:00 compute-0 sudo[166632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:01 compute-0 python3.9[166634]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:01 compute-0 sudo[166632]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:01 compute-0 sudo[166784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejcenfpicygdkyylgaedfkxeycaerfhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276621.232025-278-75814998903405/AnsiballZ_lineinfile.py'
Dec 09 10:37:01 compute-0 sudo[166784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:01 compute-0 python3.9[166786]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:01 compute-0 sudo[166784]: pam_unix(sudo:session): session closed for user root
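[editor's note] The sequence above edits /etc/multipath.conf in place: grep for a blacklist section, insert "blacklist {" and close it, strip a catch-all devnode ".*" entry, then insert find_multipaths, recheck_wwid, skip_kpartx and user_friendly_names after the defaults line. A Python approximation of those regex edits, assuming the file already contains a defaults section as the logged insertafter=^defaults parameters imply:

```python
#!/usr/bin/env python3
"""Approximate the logged multipath.conf edits with plain regex (illustrative sketch)."""
import pathlib
import re

CONF = pathlib.Path("/etc/multipath.conf")

# Options inserted after the 'defaults' line in the log above.
DEFAULTS = {
    "find_multipaths": "yes",
    "recheck_wwid": "yes",
    "skip_kpartx": "yes",
    "user_friendly_names": "no",
}

def main() -> None:
    text = CONF.read_text()

    # Ensure a blacklist section exists, as the grep + lineinfile + replace steps do.
    if not re.search(r"^blacklist\s*{", text, flags=re.MULTILINE):
        text += "\nblacklist {\n}\n"
    # Drop a catch-all 'devnode ".*"' entry right after the opening brace, if present.
    text = re.sub(r'^blacklist\s*{\n\s+devnode "\.\*"', "blacklist {", text,
                  flags=re.MULTILINE)

    # Set each defaults option, replacing an existing line or inserting after 'defaults'.
    for key, value in DEFAULTS.items():
        line = f"        {key} {value}"
        pattern = re.compile(rf"^\s+{key}\b.*$", flags=re.MULTILINE)
        if pattern.search(text):
            text = pattern.sub(line, text, count=1)
        else:
            text = re.sub(r"^(defaults.*)$", rf"\1\n{line}", text, count=1,
                          flags=re.MULTILINE)

    CONF.write_text(text)

if __name__ == "__main__":
    main()
```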
Dec 09 10:37:02 compute-0 sudo[166936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sihcaobwvviauicgschdbvgaeecwspyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276622.0840507-307-56587296126215/AnsiballZ_stat.py'
Dec 09 10:37:02 compute-0 sudo[166936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:02 compute-0 python3.9[166938]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:37:02 compute-0 sudo[166936]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:03 compute-0 sudo[167090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmqkxmeaewyaivffoawbjnpxjciafpar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276623.2506437-315-159065085708909/AnsiballZ_file.py'
Dec 09 10:37:03 compute-0 sudo[167090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:03 compute-0 python3.9[167092]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:03 compute-0 sudo[167090]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:04 compute-0 sudo[167242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kosgttzsxthqzggvdmnzhsmawkqfkcgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276624.257411-324-239241662475857/AnsiballZ_file.py'
Dec 09 10:37:04 compute-0 sudo[167242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:04 compute-0 python3.9[167244]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:37:04 compute-0 sudo[167242]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:05 compute-0 sudo[167394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txrsqevfohtwsshmpylwyzyvmblqgbhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276625.0172074-332-102792668566003/AnsiballZ_stat.py'
Dec 09 10:37:05 compute-0 sudo[167394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:05 compute-0 python3.9[167396]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:37:05 compute-0 sudo[167394]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:06 compute-0 sudo[167472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldcsmdfpudrpgdahqlqbrpeoegioazac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276625.0172074-332-102792668566003/AnsiballZ_file.py'
Dec 09 10:37:06 compute-0 sudo[167472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:06 compute-0 python3.9[167474]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:37:06 compute-0 sudo[167472]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:06 compute-0 sudo[167624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uviibnqpjijzkhsmffrwbcoytxeiiizt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276626.436199-332-99953646975129/AnsiballZ_stat.py'
Dec 09 10:37:06 compute-0 sudo[167624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:06 compute-0 python3.9[167626]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:37:06 compute-0 sudo[167624]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:07 compute-0 sudo[167702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxjmrlbkkqsqaigybnabqsajorvtysqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276626.436199-332-99953646975129/AnsiballZ_file.py'
Dec 09 10:37:07 compute-0 sudo[167702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:07 compute-0 python3.9[167704]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:37:07 compute-0 sudo[167702]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:08 compute-0 sudo[167854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xicupexvaikvdpnwbbwupokvwwwwlhtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276627.690167-355-81574420109673/AnsiballZ_file.py'
Dec 09 10:37:08 compute-0 sudo[167854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:08 compute-0 python3.9[167856]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:08 compute-0 sudo[167854]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:09 compute-0 sudo[168006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztflderdxolxterpigsxpnfudrvlqvii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276628.7158167-363-152355944614307/AnsiballZ_stat.py'
Dec 09 10:37:09 compute-0 sudo[168006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:09 compute-0 python3.9[168008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:37:09 compute-0 sudo[168006]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:09 compute-0 sudo[168084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyhkbxyahtuwjnitowoamdoijzeoqbit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276628.7158167-363-152355944614307/AnsiballZ_file.py'
Dec 09 10:37:09 compute-0 sudo[168084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:09 compute-0 python3.9[168086]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:09 compute-0 sudo[168084]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:10 compute-0 sudo[168236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujoqnniuklkygucntdelkcrixugkcqbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276630.0314326-375-114989601158325/AnsiballZ_stat.py'
Dec 09 10:37:10 compute-0 sudo[168236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:10 compute-0 python3.9[168238]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:37:10 compute-0 sudo[168236]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:11 compute-0 sudo[168314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imbawhiiqgolnrwjjqybyepddkdeeqsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276630.0314326-375-114989601158325/AnsiballZ_file.py'
Dec 09 10:37:11 compute-0 sudo[168314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:11 compute-0 python3.9[168316]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:11 compute-0 sudo[168314]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:12 compute-0 sudo[168466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrlugxvtknrftgbjrjdnucmjkryipfjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276631.7244234-387-115734131605133/AnsiballZ_systemd.py'
Dec 09 10:37:12 compute-0 sudo[168466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:12 compute-0 python3.9[168468]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:37:12 compute-0 systemd[1]: Reloading.
Dec 09 10:37:12 compute-0 systemd-rc-local-generator[168494]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:37:12 compute-0 systemd-sysv-generator[168497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:37:12 compute-0 sudo[168466]: pam_unix(sudo:session): session closed for user root
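[editor's note] The lines above install the edpm-container-shutdown unit file and its 91-edpm-container-shutdown.preset, then call ansible.builtin.systemd with daemon_reload=True, enabled=True, state=started; the netns-placeholder unit a few lines below is handled the same way. The equivalent systemctl sequence, sketched in Python:

```python
#!/usr/bin/env python3
"""systemctl equivalent of the logged ansible.builtin.systemd call (sketch)."""
import subprocess

UNIT = "edpm-container-shutdown"  # netns-placeholder is enabled and started the same way

def main() -> None:
    # daemon_reload=True: pick up the freshly installed unit file and preset.
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    # enabled=True + state=started: 'enable --now' covers both in one call.
    subprocess.run(["systemctl", "enable", "--now", UNIT], check=True)

if __name__ == "__main__":
    main()
```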
Dec 09 10:37:13 compute-0 sudo[168655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzjxubswcljpanawmceoiytszyngitgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276633.1136258-395-103435499156456/AnsiballZ_stat.py'
Dec 09 10:37:13 compute-0 sudo[168655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:13 compute-0 python3.9[168657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:37:13 compute-0 sudo[168655]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:14 compute-0 sudo[168733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkhogbgcgfeefigoounresfdfzfpzfey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276633.1136258-395-103435499156456/AnsiballZ_file.py'
Dec 09 10:37:14 compute-0 sudo[168733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:14 compute-0 python3.9[168735]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:14 compute-0 sudo[168733]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:14 compute-0 sudo[168885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpcogxzcbdnwrspescjtivlgqofpbzzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276634.4991133-407-229868242943755/AnsiballZ_stat.py'
Dec 09 10:37:14 compute-0 sudo[168885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:15 compute-0 python3.9[168887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:37:15 compute-0 sudo[168885]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:15 compute-0 sudo[168963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyrbltafbwbjfdodqgfvoeqbzhsslshw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276634.4991133-407-229868242943755/AnsiballZ_file.py'
Dec 09 10:37:15 compute-0 sudo[168963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:15 compute-0 python3.9[168965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:15 compute-0 sudo[168963]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:16 compute-0 sudo[169115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tetqbhslmplrfvisjimczqqtinqvdwdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276635.640828-419-55367814962016/AnsiballZ_systemd.py'
Dec 09 10:37:16 compute-0 sudo[169115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:16 compute-0 python3.9[169117]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:37:16 compute-0 systemd[1]: Reloading.
Dec 09 10:37:16 compute-0 systemd-rc-local-generator[169144]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:37:16 compute-0 systemd-sysv-generator[169148]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:37:16 compute-0 systemd[1]: Starting Create netns directory...
Dec 09 10:37:16 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 09 10:37:16 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 09 10:37:16 compute-0 systemd[1]: Finished Create netns directory.
Dec 09 10:37:16 compute-0 sudo[169115]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:37:16.963 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:37:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:37:16.965 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:37:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:37:16.965 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:37:17 compute-0 sudo[169308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpafnxzgspniyfxqquetpsgrqmyvfqji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276637.311472-429-163061329502479/AnsiballZ_file.py'
Dec 09 10:37:17 compute-0 sudo[169308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:17 compute-0 python3.9[169310]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:37:17 compute-0 sudo[169308]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:18 compute-0 sudo[169460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bielbkrpxfrgjtrobvhegplzabuhxeru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276637.9584718-437-18884629550102/AnsiballZ_stat.py'
Dec 09 10:37:18 compute-0 sudo[169460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:18 compute-0 python3.9[169462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:37:18 compute-0 sudo[169460]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:18 compute-0 sudo[169583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkcnidzctniealpkwkkmnlqkyltolstr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276637.9584718-437-18884629550102/AnsiballZ_copy.py'
Dec 09 10:37:18 compute-0 sudo[169583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:19 compute-0 python3.9[169585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276637.9584718-437-18884629550102/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:37:19 compute-0 sudo[169583]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:19 compute-0 sudo[169735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wajonylerktunvtgvvsabbsxnygzbbms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276639.5192404-454-166644883578969/AnsiballZ_file.py'
Dec 09 10:37:19 compute-0 sudo[169735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:20 compute-0 python3.9[169737]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:37:20 compute-0 sudo[169735]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:20 compute-0 sudo[169887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdpkaafzcgttyqnnknceidufjswlcwku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276640.2955046-462-6804839964574/AnsiballZ_stat.py'
Dec 09 10:37:20 compute-0 sudo[169887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:20 compute-0 python3.9[169889]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:37:20 compute-0 sudo[169887]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:21 compute-0 sudo[170023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxueolpuzptbwvalsjtsxdlyaudeqple ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276640.2955046-462-6804839964574/AnsiballZ_copy.py'
Dec 09 10:37:21 compute-0 sudo[170023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:21 compute-0 podman[169984]: 2025-12-09 10:37:21.217603569 +0000 UTC m=+0.066332016 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:37:21 compute-0 python3.9[170030]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276640.2955046-462-6804839964574/.source.json _original_basename=.k5wzbzvu follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:21 compute-0 sudo[170023]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:21 compute-0 sudo[170181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzlvcmkujerydfxowskoaalohtbwepgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276641.594045-477-97397593920045/AnsiballZ_file.py'
Dec 09 10:37:21 compute-0 sudo[170181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:22 compute-0 python3.9[170183]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:22 compute-0 sudo[170181]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:22 compute-0 sudo[170333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nendovrcjksjgzcpgnbbojkxbheygiid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276642.333865-485-7307328205510/AnsiballZ_stat.py'
Dec 09 10:37:22 compute-0 sudo[170333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:22 compute-0 sudo[170333]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:23 compute-0 sudo[170456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oozdgctwtxfvonomryelwcxieyvpemeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276642.333865-485-7307328205510/AnsiballZ_copy.py'
Dec 09 10:37:23 compute-0 sudo[170456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:23 compute-0 sudo[170456]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:24 compute-0 sudo[170608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrwywghzcqozysrjxnfpimjrdwedexzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276643.7384112-502-38017683754171/AnsiballZ_container_config_data.py'
Dec 09 10:37:24 compute-0 sudo[170608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:24 compute-0 python3.9[170610]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 09 10:37:24 compute-0 sudo[170608]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:25 compute-0 sudo[170760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exoootlyumbcewctyucntjggajtlmyxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276645.061098-511-34155151611303/AnsiballZ_container_config_hash.py'
Dec 09 10:37:25 compute-0 sudo[170760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:25 compute-0 python3.9[170762]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:37:25 compute-0 sudo[170760]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:26 compute-0 sudo[170912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmpvcgifofssnnbgtqlqczvcawnqogkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276646.0322008-520-16458497274527/AnsiballZ_podman_container_info.py'
Dec 09 10:37:26 compute-0 sudo[170912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:26 compute-0 python3.9[170914]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 09 10:37:26 compute-0 sudo[170912]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:28 compute-0 sudo[171107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltlxodxnmivuxckvchwfnlyoybaqxtoa ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276647.5243242-533-109815877925667/AnsiballZ_edpm_container_manage.py'
Dec 09 10:37:28 compute-0 sudo[171107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:28 compute-0 podman[171065]: 2025-12-09 10:37:28.151596116 +0000 UTC m=+0.114534001 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:37:28 compute-0 python3[171112]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:37:28 compute-0 podman[171154]: 2025-12-09 10:37:28.660809144 +0000 UTC m=+0.073097276 container create 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:37:28 compute-0 podman[171154]: 2025-12-09 10:37:28.625868091 +0000 UTC m=+0.038156313 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 09 10:37:28 compute-0 python3[171112]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 09 10:37:28 compute-0 sudo[171107]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:29 compute-0 sshd-session[171217]: Invalid user backup from 159.223.8.217 port 55574
Dec 09 10:37:29 compute-0 sudo[171344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfzyxzuclmbjwkcbbcutsnmyazcefzmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276649.0789797-541-18541072045726/AnsiballZ_stat.py'
Dec 09 10:37:29 compute-0 sudo[171344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:29 compute-0 sshd-session[171217]: Connection closed by invalid user backup 159.223.8.217 port 55574 [preauth]
Dec 09 10:37:29 compute-0 python3.9[171346]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:37:29 compute-0 sudo[171344]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:30 compute-0 sudo[171498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cegxsmgkvaovsqjhbfhtckrfnmpkwlim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276649.9736598-550-210958314577975/AnsiballZ_file.py'
Dec 09 10:37:30 compute-0 sudo[171498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:30 compute-0 python3.9[171500]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:30 compute-0 sudo[171498]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:30 compute-0 sudo[171574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbbiunsoohkgmuzdebjurjcbwthuurkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276649.9736598-550-210958314577975/AnsiballZ_stat.py'
Dec 09 10:37:30 compute-0 sudo[171574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:31 compute-0 python3.9[171576]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:37:31 compute-0 sudo[171574]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:31 compute-0 sudo[171725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcatewklsyurgshbunvjitmgvixmlxaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276651.1743646-550-279337523479128/AnsiballZ_copy.py'
Dec 09 10:37:31 compute-0 sudo[171725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:31 compute-0 python3.9[171727]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276651.1743646-550-279337523479128/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:31 compute-0 sudo[171725]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:32 compute-0 sudo[171801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyiyloefasotwxhndhgzizsqcheagjla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276651.1743646-550-279337523479128/AnsiballZ_systemd.py'
Dec 09 10:37:32 compute-0 sudo[171801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:32 compute-0 python3.9[171803]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:37:32 compute-0 systemd[1]: Reloading.
Dec 09 10:37:32 compute-0 systemd-rc-local-generator[171832]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:37:32 compute-0 systemd-sysv-generator[171837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:37:32 compute-0 sudo[171801]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:33 compute-0 sudo[171912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iobdrhmmjmvlcdzqmupbutffcnqbvwjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276651.1743646-550-279337523479128/AnsiballZ_systemd.py'
Dec 09 10:37:33 compute-0 sudo[171912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:33 compute-0 python3.9[171914]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:37:33 compute-0 systemd[1]: Reloading.
Dec 09 10:37:33 compute-0 systemd-rc-local-generator[171944]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:37:33 compute-0 systemd-sysv-generator[171948]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:37:33 compute-0 systemd[1]: Starting multipathd container...
Dec 09 10:37:33 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 09 10:37:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 09 10:37:33 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.
Dec 09 10:37:33 compute-0 podman[171954]: 2025-12-09 10:37:33.85101657 +0000 UTC m=+0.143172236 container init 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 09 10:37:33 compute-0 multipathd[171970]: + sudo -E kolla_set_configs
Dec 09 10:37:33 compute-0 sudo[171977]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 09 10:37:33 compute-0 sudo[171977]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:37:33 compute-0 sudo[171977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 09 10:37:33 compute-0 podman[171954]: 2025-12-09 10:37:33.888867545 +0000 UTC m=+0.181023151 container start 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202)
Dec 09 10:37:33 compute-0 podman[171954]: multipathd
Dec 09 10:37:33 compute-0 systemd[1]: Started multipathd container.
Dec 09 10:37:33 compute-0 sudo[171912]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:33 compute-0 multipathd[171970]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:37:33 compute-0 multipathd[171970]: INFO:__main__:Validating config file
Dec 09 10:37:33 compute-0 multipathd[171970]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:37:33 compute-0 multipathd[171970]: INFO:__main__:Writing out command to execute
Dec 09 10:37:33 compute-0 sudo[171977]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:33 compute-0 multipathd[171970]: ++ cat /run_command
Dec 09 10:37:33 compute-0 multipathd[171970]: + CMD='/usr/sbin/multipathd -d'
Dec 09 10:37:33 compute-0 multipathd[171970]: + ARGS=
Dec 09 10:37:33 compute-0 multipathd[171970]: + sudo kolla_copy_cacerts
Dec 09 10:37:33 compute-0 sudo[171997]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 09 10:37:33 compute-0 sudo[171997]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:37:33 compute-0 sudo[171997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 09 10:37:33 compute-0 sudo[171997]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:33 compute-0 multipathd[171970]: + [[ ! -n '' ]]
Dec 09 10:37:33 compute-0 multipathd[171970]: + . kolla_extend_start
Dec 09 10:37:33 compute-0 multipathd[171970]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 09 10:37:33 compute-0 multipathd[171970]: Running command: '/usr/sbin/multipathd -d'
Dec 09 10:37:33 compute-0 multipathd[171970]: + umask 0022
Dec 09 10:37:33 compute-0 multipathd[171970]: + exec /usr/sbin/multipathd -d
Dec 09 10:37:33 compute-0 podman[171976]: 2025-12-09 10:37:33.990480112 +0000 UTC m=+0.077690436 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:37:33 compute-0 systemd[1]: 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a-5269e27426dc50fc.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:37:33 compute-0 systemd[1]: 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a-5269e27426dc50fc.service: Failed with result 'exit-code'.
Dec 09 10:37:34 compute-0 multipathd[171970]: 3278.862522 | --------start up--------
Dec 09 10:37:34 compute-0 multipathd[171970]: 3278.862552 | read /etc/multipath.conf
Dec 09 10:37:34 compute-0 multipathd[171970]: 3278.871968 | path checkers start up
Dec 09 10:37:34 compute-0 python3.9[172160]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:37:35 compute-0 sudo[172312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqcnoviyyttlfxyomblnoutxtgiemiww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276654.8618393-586-21152157547058/AnsiballZ_command.py'
Dec 09 10:37:35 compute-0 sudo[172312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:35 compute-0 python3.9[172314]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:37:35 compute-0 sudo[172312]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:36 compute-0 sudo[172476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvxqcftveksqbfyernzcnbxtfuwxkutr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276655.950373-594-200073900890011/AnsiballZ_systemd.py'
Dec 09 10:37:36 compute-0 sudo[172476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:36 compute-0 python3.9[172478]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:37:36 compute-0 systemd[1]: Stopping multipathd container...
Dec 09 10:37:36 compute-0 multipathd[171970]: 3281.582186 | exit (signal)
Dec 09 10:37:36 compute-0 multipathd[171970]: 3281.582522 | --------shut down-------
Dec 09 10:37:36 compute-0 systemd[1]: libpod-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec 09 10:37:36 compute-0 podman[172482]: 2025-12-09 10:37:36.780016097 +0000 UTC m=+0.102748700 container died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:37:36 compute-0 systemd[1]: 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a-5269e27426dc50fc.timer: Deactivated successfully.
Dec 09 10:37:36 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.
Dec 09 10:37:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a-userdata-shm.mount: Deactivated successfully.
Dec 09 10:37:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c-merged.mount: Deactivated successfully.
Dec 09 10:37:36 compute-0 podman[172482]: 2025-12-09 10:37:36.839693015 +0000 UTC m=+0.162425598 container cleanup 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:37:36 compute-0 podman[172482]: multipathd
Dec 09 10:37:36 compute-0 podman[172512]: multipathd
Dec 09 10:37:36 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 09 10:37:36 compute-0 systemd[1]: Stopped multipathd container.
Dec 09 10:37:36 compute-0 systemd[1]: Starting multipathd container...
Dec 09 10:37:37 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 09 10:37:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 09 10:37:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.
Dec 09 10:37:37 compute-0 podman[172525]: 2025-12-09 10:37:37.077398669 +0000 UTC m=+0.142121147 container init 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 09 10:37:37 compute-0 multipathd[172540]: + sudo -E kolla_set_configs
Dec 09 10:37:37 compute-0 sudo[172546]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 09 10:37:37 compute-0 sudo[172546]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:37:37 compute-0 sudo[172546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 09 10:37:37 compute-0 podman[172525]: 2025-12-09 10:37:37.106434915 +0000 UTC m=+0.171157403 container start 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 09 10:37:37 compute-0 podman[172525]: multipathd
Dec 09 10:37:37 compute-0 systemd[1]: Started multipathd container.
Dec 09 10:37:37 compute-0 multipathd[172540]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:37:37 compute-0 multipathd[172540]: INFO:__main__:Validating config file
Dec 09 10:37:37 compute-0 multipathd[172540]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:37:37 compute-0 multipathd[172540]: INFO:__main__:Writing out command to execute
Dec 09 10:37:37 compute-0 sudo[172546]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:37 compute-0 multipathd[172540]: ++ cat /run_command
Dec 09 10:37:37 compute-0 multipathd[172540]: + CMD='/usr/sbin/multipathd -d'
Dec 09 10:37:37 compute-0 multipathd[172540]: + ARGS=
Dec 09 10:37:37 compute-0 multipathd[172540]: + sudo kolla_copy_cacerts
Dec 09 10:37:37 compute-0 sudo[172562]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 09 10:37:37 compute-0 sudo[172562]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:37:37 compute-0 sudo[172562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 09 10:37:37 compute-0 sudo[172562]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:37 compute-0 sudo[172476]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:37 compute-0 multipathd[172540]: + [[ ! -n '' ]]
Dec 09 10:37:37 compute-0 multipathd[172540]: + . kolla_extend_start
Dec 09 10:37:37 compute-0 multipathd[172540]: Running command: '/usr/sbin/multipathd -d'
Dec 09 10:37:37 compute-0 multipathd[172540]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 09 10:37:37 compute-0 multipathd[172540]: + umask 0022
Dec 09 10:37:37 compute-0 multipathd[172540]: + exec /usr/sbin/multipathd -d
Dec 09 10:37:37 compute-0 multipathd[172540]: 3282.032835 | --------start up--------
Dec 09 10:37:37 compute-0 multipathd[172540]: 3282.032851 | read /etc/multipath.conf
Dec 09 10:37:37 compute-0 multipathd[172540]: 3282.038844 | path checkers start up
Dec 09 10:37:37 compute-0 podman[172547]: 2025-12-09 10:37:37.204505503 +0000 UTC m=+0.086822763 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:37:37 compute-0 sudo[172728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsfbsyqwytltubbpbzrxklxdwnonkhit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276657.3398535-602-207739448238684/AnsiballZ_file.py'
Dec 09 10:37:37 compute-0 sudo[172728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:37 compute-0 python3.9[172730]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:37 compute-0 sudo[172728]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:38 compute-0 sudo[172880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwooyvzbqrxzrhzfcvxjptwlnodkhqhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276658.453599-614-74395334687060/AnsiballZ_file.py'
Dec 09 10:37:38 compute-0 sudo[172880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:39 compute-0 python3.9[172882]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 09 10:37:39 compute-0 sudo[172880]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:39 compute-0 sudo[173032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyrfmzfycxcbvaipsibpvvulyxbhjpjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276659.2802866-622-12986739880927/AnsiballZ_modprobe.py'
Dec 09 10:37:39 compute-0 sudo[173032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:40 compute-0 python3.9[173034]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 09 10:37:40 compute-0 kernel: Key type psk registered
Dec 09 10:37:40 compute-0 sudo[173032]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:40 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 09 10:37:41 compute-0 sudo[173196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uinawlqsflcqdddzbjepabopkoernvhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276660.688303-630-50489531492381/AnsiballZ_stat.py'
Dec 09 10:37:41 compute-0 sudo[173196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:41 compute-0 python3.9[173198]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:37:41 compute-0 sudo[173196]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:41 compute-0 sudo[173319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrgrtojjmzidrrepoohsocougxjzoit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276660.688303-630-50489531492381/AnsiballZ_copy.py'
Dec 09 10:37:41 compute-0 sudo[173319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:41 compute-0 python3.9[173321]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276660.688303-630-50489531492381/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:41 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 09 10:37:41 compute-0 sudo[173319]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:42 compute-0 sudo[173472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqiewadiexxkthrtwjemnqfgxxkynfgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276662.1913188-646-206976814658090/AnsiballZ_lineinfile.py'
Dec 09 10:37:42 compute-0 sudo[173472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:42 compute-0 python3.9[173474]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:42 compute-0 sudo[173472]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:43 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 09 10:37:43 compute-0 sudo[173625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjtrfcjgrcwpwrwsuitemfzsfrpmlrpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276663.0308547-654-226679304627447/AnsiballZ_systemd.py'
Dec 09 10:37:43 compute-0 sudo[173625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:44 compute-0 python3.9[173627]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:37:44 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 09 10:37:44 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 09 10:37:44 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 09 10:37:44 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 09 10:37:44 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 09 10:37:44 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 09 10:37:44 compute-0 sudo[173625]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:44 compute-0 sudo[173782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmrqfzpjxdoergigtjcezenrsgjfxtxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276664.5635319-662-228124226828391/AnsiballZ_dnf.py'
Dec 09 10:37:44 compute-0 sudo[173782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:45 compute-0 python3.9[173784]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:37:50 compute-0 systemd[1]: Reloading.
Dec 09 10:37:50 compute-0 systemd-rc-local-generator[173814]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:37:50 compute-0 systemd-sysv-generator[173819]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:37:50 compute-0 systemd[1]: Reloading.
Dec 09 10:37:50 compute-0 systemd-sysv-generator[173851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:37:50 compute-0 systemd-rc-local-generator[173846]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:37:51 compute-0 systemd-logind[806]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 09 10:37:51 compute-0 systemd-logind[806]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 09 10:37:51 compute-0 podman[173893]: 2025-12-09 10:37:51.427798679 +0000 UTC m=+0.063607080 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:37:51 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 09 10:37:51 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 09 10:37:51 compute-0 systemd[1]: Reloading.
Dec 09 10:37:51 compute-0 systemd-rc-local-generator[173968]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:37:51 compute-0 systemd-sysv-generator[173972]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:37:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 09 10:37:52 compute-0 sudo[173782]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:52 compute-0 sudo[175262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmmzgvfzconodfyzuwgysmevwoeixhik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276672.5355675-670-61147887390411/AnsiballZ_systemd_service.py'
Dec 09 10:37:52 compute-0 sudo[175262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:52 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 09 10:37:52 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 09 10:37:52 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.550s CPU time.
Dec 09 10:37:52 compute-0 systemd[1]: run-r22cf26c6cdbe4eeab63dde32618f0386.service: Deactivated successfully.
Dec 09 10:37:53 compute-0 python3.9[175268]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:37:53 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 09 10:37:53 compute-0 iscsid[163602]: iscsid shutting down.
Dec 09 10:37:53 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 09 10:37:53 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 09 10:37:53 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 09 10:37:53 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 09 10:37:53 compute-0 systemd[1]: Started Open-iSCSI.
Dec 09 10:37:53 compute-0 sudo[175262]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:53 compute-0 python3.9[175423]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:37:54 compute-0 sudo[175577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeisgwvnsnyokfcbzeropdnkiwkiiopb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276674.4071543-688-11282142476921/AnsiballZ_file.py'
Dec 09 10:37:54 compute-0 sudo[175577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:54 compute-0 python3.9[175579]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:37:54 compute-0 sudo[175577]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:55 compute-0 sudo[175729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yriztfaxzqhbscorhjqvnrwrljzbespa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276675.2781296-699-250480748861202/AnsiballZ_systemd_service.py'
Dec 09 10:37:55 compute-0 sudo[175729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:37:55 compute-0 python3.9[175731]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:37:55 compute-0 systemd[1]: Reloading.
Dec 09 10:37:55 compute-0 systemd-sysv-generator[175761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:37:55 compute-0 systemd-rc-local-generator[175755]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:37:56 compute-0 sudo[175729]: pam_unix(sudo:session): session closed for user root
Dec 09 10:37:56 compute-0 python3.9[175916]: ansible-ansible.builtin.service_facts Invoked
Dec 09 10:37:56 compute-0 network[175933]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 10:37:56 compute-0 network[175934]: 'network-scripts' will be removed from distribution in near future.
Dec 09 10:37:56 compute-0 network[175935]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 10:37:58 compute-0 podman[175958]: 2025-12-09 10:37:58.936252048 +0000 UTC m=+0.096418322 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 09 10:38:01 compute-0 sudo[176233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhaoiaibxyauxvuconivivvvgooqmmgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276680.9860764-718-70998749655891/AnsiballZ_systemd_service.py'
Dec 09 10:38:01 compute-0 sudo[176233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:01 compute-0 python3.9[176235]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:38:01 compute-0 sudo[176233]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:01 compute-0 sshd-session[176236]: Invalid user backup from 159.223.8.217 port 44180
Dec 09 10:38:01 compute-0 sshd-session[176236]: Connection closed by invalid user backup 159.223.8.217 port 44180 [preauth]
Dec 09 10:38:01 compute-0 sudo[176388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-payvzppgmowpyxprzjyjynzsiaiujxnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276681.6994405-718-89587191243694/AnsiballZ_systemd_service.py'
Dec 09 10:38:01 compute-0 sudo[176388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:02 compute-0 python3.9[176390]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:38:02 compute-0 sudo[176388]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:02 compute-0 sudo[176541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebxzyjvnrjcitivyankgnolkmbpfxfpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276682.4010892-718-15707815573079/AnsiballZ_systemd_service.py'
Dec 09 10:38:02 compute-0 sudo[176541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:03 compute-0 python3.9[176543]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:38:03 compute-0 sudo[176541]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:03 compute-0 sudo[176694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpeqtbxwlaeqzdehlawbfqkuvigtfjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276683.2213774-718-218478369329657/AnsiballZ_systemd_service.py'
Dec 09 10:38:03 compute-0 sudo[176694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:03 compute-0 python3.9[176696]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:38:03 compute-0 sudo[176694]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:04 compute-0 sudo[176847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiinoccwsfajzcrrqrouuhfxvdmqrzlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276684.1019778-718-255911650643880/AnsiballZ_systemd_service.py'
Dec 09 10:38:04 compute-0 sudo[176847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:04 compute-0 python3.9[176849]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:38:04 compute-0 sudo[176847]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:05 compute-0 sudo[177000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwmeekqvwbsvpjndguytcnprjishhjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276684.8841162-718-24325091627601/AnsiballZ_systemd_service.py'
Dec 09 10:38:05 compute-0 sudo[177000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:05 compute-0 python3.9[177002]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:38:05 compute-0 sudo[177000]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:06 compute-0 sudo[177153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxodyyjjxqvoojtyxutvlibmvexeqrfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276685.686213-718-6405521253174/AnsiballZ_systemd_service.py'
Dec 09 10:38:06 compute-0 sudo[177153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:06 compute-0 python3.9[177155]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:38:06 compute-0 sudo[177153]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:06 compute-0 sudo[177306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnrxvcuailjxpbrdgupvjesumvfjlltc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276686.5393577-718-141522193937611/AnsiballZ_systemd_service.py'
Dec 09 10:38:06 compute-0 sudo[177306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:07 compute-0 python3.9[177308]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:38:07 compute-0 sudo[177306]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:07 compute-0 sudo[177474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtnyfhozhchdcbvarbpvdwipdxvjegx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276687.4590063-777-222152036375191/AnsiballZ_file.py'
Dec 09 10:38:07 compute-0 sudo[177474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:07 compute-0 podman[177433]: 2025-12-09 10:38:07.813091384 +0000 UTC m=+0.061094929 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 09 10:38:07 compute-0 python3.9[177481]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:08 compute-0 sudo[177474]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:08 compute-0 sudo[177631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbgcuvgoqkkxvhouamwuocvwuhjplyba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276688.156245-777-78796487707849/AnsiballZ_file.py'
Dec 09 10:38:08 compute-0 sudo[177631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:08 compute-0 python3.9[177633]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:08 compute-0 sudo[177631]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:09 compute-0 sudo[177783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjnpycaukyhszkxxidvqfgobtcfjjano ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276688.8755035-777-95427832010167/AnsiballZ_file.py'
Dec 09 10:38:09 compute-0 sudo[177783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:09 compute-0 python3.9[177785]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:09 compute-0 sudo[177783]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:09 compute-0 sudo[177935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjcvfdkqljgsmwhdstkkxhvlkglmyhsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276689.608211-777-129307362302416/AnsiballZ_file.py'
Dec 09 10:38:09 compute-0 sudo[177935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:10 compute-0 python3.9[177937]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:10 compute-0 sudo[177935]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:10 compute-0 sudo[178087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usnhxwltyxxdgfmzkrxwivqlcmtqkauc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276690.2538862-777-79973561806492/AnsiballZ_file.py'
Dec 09 10:38:10 compute-0 sudo[178087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:10 compute-0 python3.9[178089]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:10 compute-0 sudo[178087]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:11 compute-0 sudo[178239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-heykrmgewwkwafmozaivieuztcaoqebz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276690.8195832-777-169985849141203/AnsiballZ_file.py'
Dec 09 10:38:11 compute-0 sudo[178239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:11 compute-0 python3.9[178241]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:11 compute-0 sudo[178239]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:11 compute-0 sudo[178391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkxocyervcsjbjllrkajqkdhvbogoepn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276691.4528966-777-40009832850556/AnsiballZ_file.py'
Dec 09 10:38:11 compute-0 sudo[178391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:11 compute-0 python3.9[178393]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:11 compute-0 sudo[178391]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:12 compute-0 sudo[178543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvvlevafqhialxtsobmaeayyxtzbuiwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276692.1050007-777-5118633277878/AnsiballZ_file.py'
Dec 09 10:38:12 compute-0 sudo[178543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:12 compute-0 python3.9[178545]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:12 compute-0 sudo[178543]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:13 compute-0 sudo[178695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhojrjyjcsafbdmqopoenaqlyvngbwim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276692.7904434-834-114014796197860/AnsiballZ_file.py'
Dec 09 10:38:13 compute-0 sudo[178695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:13 compute-0 python3.9[178697]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:13 compute-0 sudo[178695]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:13 compute-0 sudo[178847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlzrqfndsoqbliowefoaeyfdqqdhfkii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276693.4594557-834-259759658423283/AnsiballZ_file.py'
Dec 09 10:38:13 compute-0 sudo[178847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:13 compute-0 python3.9[178849]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:13 compute-0 sudo[178847]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:14 compute-0 sudo[178999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvpomlpvajtzmhlchigcewciimkqluud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276694.0527143-834-187535557142390/AnsiballZ_file.py'
Dec 09 10:38:14 compute-0 sudo[178999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:14 compute-0 python3.9[179001]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:14 compute-0 sudo[178999]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:14 compute-0 sudo[179151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isajbxgpvpzeuihbzotdcybbtgzugwij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276694.6637647-834-48469090206039/AnsiballZ_file.py'
Dec 09 10:38:14 compute-0 sudo[179151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:15 compute-0 python3.9[179153]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:15 compute-0 sudo[179151]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:15 compute-0 sudo[179303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtjseezmdqfndlnmpbcpagebxdaycuta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276695.3126054-834-16492952123707/AnsiballZ_file.py'
Dec 09 10:38:15 compute-0 sudo[179303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:15 compute-0 python3.9[179305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:15 compute-0 sudo[179303]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:16 compute-0 sudo[179455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udntiikhdjabayytvwpvwnfvqlpbvfxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276696.0223424-834-245067524177163/AnsiballZ_file.py'
Dec 09 10:38:16 compute-0 sudo[179455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:16 compute-0 python3.9[179457]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:16 compute-0 sudo[179455]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:16 compute-0 sudo[179607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpmwfgsfahhrinwltdiofcafobgpdpxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276696.6402626-834-145892205487738/AnsiballZ_file.py'
Dec 09 10:38:16 compute-0 sudo[179607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:38:16.965 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:38:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:38:16.966 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:38:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:38:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:38:17 compute-0 python3.9[179609]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:17 compute-0 sudo[179607]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:17 compute-0 sudo[179759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqbncnesfjzqxoypdairhuuzbjdaklci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276697.309104-834-48275971402360/AnsiballZ_file.py'
Dec 09 10:38:17 compute-0 sudo[179759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:17 compute-0 python3.9[179761]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:17 compute-0 sudo[179759]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:18 compute-0 sudo[179911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhtpqgjijlixteihaxaqsdgckppjehxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276698.0448487-892-239338626021501/AnsiballZ_command.py'
Dec 09 10:38:18 compute-0 sudo[179911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:18 compute-0 python3.9[179913]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:38:18 compute-0 sudo[179911]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:19 compute-0 python3.9[180065]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 10:38:20 compute-0 sudo[180215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usgyfflbzpomxqdzgzfemavxkvqbcens ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276699.7701867-910-175412666206628/AnsiballZ_systemd_service.py'
Dec 09 10:38:20 compute-0 sudo[180215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:20 compute-0 python3.9[180217]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:38:20 compute-0 systemd[1]: Reloading.
Dec 09 10:38:20 compute-0 systemd-rc-local-generator[180241]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:38:20 compute-0 systemd-sysv-generator[180246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:38:21 compute-0 sudo[180215]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:21 compute-0 podman[180301]: 2025-12-09 10:38:21.896569268 +0000 UTC m=+0.057341843 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:38:22 compute-0 sudo[180423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dboqggqwtieiuqwxedwpfsfineirmdyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276701.8188407-918-257618994568761/AnsiballZ_command.py'
Dec 09 10:38:22 compute-0 sudo[180423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:22 compute-0 python3.9[180425]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:38:22 compute-0 sudo[180423]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:22 compute-0 sudo[180576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cveevoezbekrvwfaekzpanmfjfhrvqtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276702.4745653-918-125432254574785/AnsiballZ_command.py'
Dec 09 10:38:22 compute-0 sudo[180576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:22 compute-0 python3.9[180578]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:38:22 compute-0 sudo[180576]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:23 compute-0 sudo[180729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtzitmquxajrvbvvtstipdhygdtlwpzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276703.1106424-918-180116687390934/AnsiballZ_command.py'
Dec 09 10:38:23 compute-0 sudo[180729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:23 compute-0 python3.9[180731]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:38:23 compute-0 sudo[180729]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:24 compute-0 sudo[180882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgknrboanufvjfljgpmxqmfxajptqbgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276703.774805-918-137695210906968/AnsiballZ_command.py'
Dec 09 10:38:24 compute-0 sudo[180882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:24 compute-0 python3.9[180884]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:38:24 compute-0 sudo[180882]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:24 compute-0 sudo[181035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqsgpoimptowncscvsdczwrjaycmqzzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276704.4543073-918-267803837856751/AnsiballZ_command.py'
Dec 09 10:38:24 compute-0 sudo[181035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:24 compute-0 python3.9[181037]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:38:25 compute-0 sudo[181035]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:25 compute-0 sudo[181188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbfpnrpehjpyjpkpzmhnovlbhawvnsbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276705.1500168-918-245860726418042/AnsiballZ_command.py'
Dec 09 10:38:25 compute-0 sudo[181188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:25 compute-0 python3.9[181190]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:38:25 compute-0 sudo[181188]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:26 compute-0 sudo[181341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbaoijgspphvbvqtglmrmvtqxfdafkma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276705.847403-918-158413884257112/AnsiballZ_command.py'
Dec 09 10:38:26 compute-0 sudo[181341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:26 compute-0 python3.9[181343]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:38:26 compute-0 sudo[181341]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:26 compute-0 sudo[181494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjyigayaiiesnmadhkpcgbncmqnlrykv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276706.559116-918-170920245141352/AnsiballZ_command.py'
Dec 09 10:38:26 compute-0 sudo[181494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:27 compute-0 python3.9[181496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:38:27 compute-0 sudo[181494]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:28 compute-0 sudo[181647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwrtycxetbltthjthixzpugiyfyqyzcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276708.0035157-997-141521450033893/AnsiballZ_file.py'
Dec 09 10:38:28 compute-0 sudo[181647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:28 compute-0 python3.9[181649]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:28 compute-0 sudo[181647]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:28 compute-0 sudo[181799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eromkvfzmjihgvsdaxsizzskxbwuhakd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276708.6914861-997-152515037329247/AnsiballZ_file.py'
Dec 09 10:38:28 compute-0 sudo[181799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:29 compute-0 podman[181801]: 2025-12-09 10:38:29.138361934 +0000 UTC m=+0.136258012 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:38:29 compute-0 python3.9[181802]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:29 compute-0 sudo[181799]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:29 compute-0 sudo[181978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtdyyemodqrisulfmlywzkdotfrgkmnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276709.3289137-997-74105851316127/AnsiballZ_file.py'
Dec 09 10:38:29 compute-0 sudo[181978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:29 compute-0 python3.9[181980]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:29 compute-0 sudo[181978]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:30 compute-0 sudo[182130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spyjwlulkvudkqmofpndgojvjfimalnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276710.0513935-1019-113039816678949/AnsiballZ_file.py'
Dec 09 10:38:30 compute-0 sudo[182130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:30 compute-0 python3.9[182132]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:30 compute-0 sudo[182130]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:30 compute-0 sudo[182282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omeuhqukmywlqidsejxudjvpzwttxdlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276710.7074199-1019-111724832955622/AnsiballZ_file.py'
Dec 09 10:38:30 compute-0 sudo[182282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:31 compute-0 python3.9[182284]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:31 compute-0 sudo[182282]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:31 compute-0 sudo[182434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktejjvvpvbxdgpevdnljcwlwmijdfcbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276711.3356318-1019-176201554794332/AnsiballZ_file.py'
Dec 09 10:38:31 compute-0 sudo[182434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:31 compute-0 python3.9[182436]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:31 compute-0 sudo[182434]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:32 compute-0 sudo[182586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lflyucsvxkkongymnupwuuvrissggssk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276712.0758862-1019-229958969033802/AnsiballZ_file.py'
Dec 09 10:38:32 compute-0 sudo[182586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:32 compute-0 python3.9[182588]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:32 compute-0 sudo[182586]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:33 compute-0 sudo[182738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulbzlzxzlfazkaspqhrvtlaygznblpmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276712.7154412-1019-280662271986876/AnsiballZ_file.py'
Dec 09 10:38:33 compute-0 sudo[182738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:33 compute-0 python3.9[182740]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:33 compute-0 sudo[182738]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:33 compute-0 sudo[182890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chpoplkknpbxwrcrktbzlryhbmfbrlkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276713.393122-1019-230592394316956/AnsiballZ_file.py'
Dec 09 10:38:33 compute-0 sudo[182890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:33 compute-0 python3.9[182892]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:33 compute-0 sudo[182890]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:34 compute-0 sudo[183044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naiwyldhpyvjxvcffzlorsuypquanmgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276714.0457997-1019-62785113754096/AnsiballZ_file.py'
Dec 09 10:38:34 compute-0 sudo[183044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:34 compute-0 sshd-session[182940]: Invalid user backup from 159.223.8.217 port 56514
Dec 09 10:38:34 compute-0 sshd-session[182940]: Connection closed by invalid user backup 159.223.8.217 port 56514 [preauth]
Dec 09 10:38:34 compute-0 python3.9[183046]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:34 compute-0 sudo[183044]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:38 compute-0 podman[183071]: 2025-12-09 10:38:38.919474272 +0000 UTC m=+0.078320420 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:38:39 compute-0 sudo[183216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjfuxnwnkfvmpxeeszygscttntpwltku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276719.4019623-1188-64639209314183/AnsiballZ_getent.py'
Dec 09 10:38:39 compute-0 sudo[183216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:40 compute-0 python3.9[183218]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 09 10:38:40 compute-0 sudo[183216]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:40 compute-0 sudo[183369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibwdrgyfvyddkzjbmqsfsdbxtwsxledg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276720.3513875-1196-172979775708734/AnsiballZ_group.py'
Dec 09 10:38:40 compute-0 sudo[183369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:41 compute-0 python3.9[183371]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 10:38:41 compute-0 groupadd[183372]: group added to /etc/group: name=nova, GID=42436
Dec 09 10:38:41 compute-0 groupadd[183372]: group added to /etc/gshadow: name=nova
Dec 09 10:38:41 compute-0 groupadd[183372]: new group: name=nova, GID=42436
Dec 09 10:38:41 compute-0 sudo[183369]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:41 compute-0 sudo[183527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrfqyzduwxgoqxsrkyengrstfygdxaet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276721.2543936-1204-112097964821611/AnsiballZ_user.py'
Dec 09 10:38:41 compute-0 sudo[183527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:41 compute-0 python3.9[183529]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 10:38:42 compute-0 useradd[183531]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 09 10:38:42 compute-0 useradd[183531]: add 'nova' to group 'libvirt'
Dec 09 10:38:42 compute-0 useradd[183531]: add 'nova' to shadow group 'libvirt'
Dec 09 10:38:42 compute-0 sudo[183527]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:42 compute-0 sshd-session[183562]: Accepted publickey for zuul from 192.168.122.30 port 45248 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:38:42 compute-0 systemd-logind[806]: New session 25 of user zuul.
Dec 09 10:38:42 compute-0 systemd[1]: Started Session 25 of User zuul.
Dec 09 10:38:42 compute-0 sshd-session[183562]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:38:43 compute-0 sshd-session[183565]: Received disconnect from 192.168.122.30 port 45248:11: disconnected by user
Dec 09 10:38:43 compute-0 sshd-session[183565]: Disconnected from user zuul 192.168.122.30 port 45248
Dec 09 10:38:43 compute-0 sshd-session[183562]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:38:43 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Dec 09 10:38:43 compute-0 systemd-logind[806]: Session 25 logged out. Waiting for processes to exit.
Dec 09 10:38:43 compute-0 systemd-logind[806]: Removed session 25.
Dec 09 10:38:44 compute-0 python3.9[183715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:38:44 compute-0 python3.9[183836]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276723.6994681-1229-270859269794157/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:45 compute-0 python3.9[183986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:38:45 compute-0 python3.9[184062]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:46 compute-0 python3.9[184212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:38:46 compute-0 python3.9[184333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276725.9046667-1229-192809878384502/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:47 compute-0 python3.9[184483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:38:48 compute-0 sshd[131060]: Timeout before authentication for connection from 27.148.182.148 to 38.102.83.201, pid = 163872
Dec 09 10:38:48 compute-0 python3.9[184604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276727.1456003-1229-8382655651154/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:49 compute-0 python3.9[184754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:38:49 compute-0 python3.9[184875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276728.5009077-1229-37444260984569/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:50 compute-0 python3.9[185025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:38:50 compute-0 python3.9[185146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276729.7333763-1229-162208212927259/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:51 compute-0 sudo[185296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gguhwspgjbiuuqgkrdjfyvpmpozdtmlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276730.9487746-1312-81596983163806/AnsiballZ_file.py'
Dec 09 10:38:51 compute-0 sudo[185296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:51 compute-0 python3.9[185298]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:51 compute-0 sudo[185296]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:52 compute-0 sudo[185458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrdqjbrxhptmwftjdngqctldmmovgris ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276731.8149507-1320-124500984474953/AnsiballZ_copy.py'
Dec 09 10:38:52 compute-0 sudo[185458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:52 compute-0 podman[185422]: 2025-12-09 10:38:52.142384751 +0000 UTC m=+0.070016259 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 09 10:38:52 compute-0 python3.9[185468]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:38:52 compute-0 sudo[185458]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:52 compute-0 sudo[185620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivwqfqmvjfcdhjqcqnvbjoyjxwuyfjul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276732.5013847-1328-125005018330542/AnsiballZ_stat.py'
Dec 09 10:38:52 compute-0 sudo[185620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:53 compute-0 python3.9[185622]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:38:53 compute-0 sudo[185620]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:53 compute-0 sudo[185772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djnluprvlgpuvacjsecooyhaxqlzxvsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276733.2309291-1336-239197011667705/AnsiballZ_stat.py'
Dec 09 10:38:53 compute-0 sudo[185772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:53 compute-0 python3.9[185774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:38:53 compute-0 sudo[185772]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:54 compute-0 sudo[185895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfopplrnwpbyhohdstpjnfgqspsuclba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276733.2309291-1336-239197011667705/AnsiballZ_copy.py'
Dec 09 10:38:54 compute-0 sudo[185895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:54 compute-0 python3.9[185897]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765276733.2309291-1336-239197011667705/.source _original_basename=.c_wcksdd follow=False checksum=bbff628a6ea4f12994b66e810982d63a67a943ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 09 10:38:54 compute-0 sudo[185895]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:55 compute-0 python3.9[186049]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:38:55 compute-0 python3.9[186201]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:38:56 compute-0 python3.9[186322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276735.299076-1362-126731258462830/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:57 compute-0 python3.9[186472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:38:57 compute-0 python3.9[186593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276736.7302845-1377-156590338410160/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:38:58 compute-0 sudo[186743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dshmcgvjmvvfmvlwpnuzeteahorcnpmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276738.2034938-1394-150993975928624/AnsiballZ_container_config_data.py'
Dec 09 10:38:58 compute-0 sudo[186743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:58 compute-0 python3.9[186745]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 09 10:38:58 compute-0 sudo[186743]: pam_unix(sudo:session): session closed for user root
Dec 09 10:38:59 compute-0 sudo[186914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjdwqvrfgzyzyncudbovwjyagjqbnqnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276738.9700353-1403-87334554239592/AnsiballZ_container_config_hash.py'
Dec 09 10:38:59 compute-0 sudo[186914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:38:59 compute-0 podman[186869]: 2025-12-09 10:38:59.304880681 +0000 UTC m=+0.090406565 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 10:38:59 compute-0 python3.9[186920]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:38:59 compute-0 sudo[186914]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:00 compute-0 sudo[187074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwndbyjpliyuokqrjahcaqjhduktvzfb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276739.8027136-1413-76701950467406/AnsiballZ_edpm_container_manage.py'
Dec 09 10:39:00 compute-0 sudo[187074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:00 compute-0 python3[187076]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:39:00 compute-0 podman[187111]: 2025-12-09 10:39:00.631150221 +0000 UTC m=+0.074252267 container create 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3)
Dec 09 10:39:00 compute-0 podman[187111]: 2025-12-09 10:39:00.595286373 +0000 UTC m=+0.038388499 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 09 10:39:00 compute-0 python3[187076]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 09 10:39:00 compute-0 sudo[187074]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:01 compute-0 sudo[187299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpftehrzbnsfwmanfymuannnawwfthgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276741.0219219-1421-228701756519167/AnsiballZ_stat.py'
Dec 09 10:39:01 compute-0 sudo[187299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:01 compute-0 python3.9[187301]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:39:01 compute-0 sudo[187299]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:02 compute-0 sudo[187453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbgmlmxqbbengpnallkfzdmmciwytwtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276741.9599166-1433-267826618628503/AnsiballZ_container_config_data.py'
Dec 09 10:39:02 compute-0 sudo[187453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:02 compute-0 python3.9[187455]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 09 10:39:02 compute-0 sudo[187453]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:03 compute-0 sudo[187605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekcejlahehigunpzlbbonmrhryglckbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276742.794496-1442-5635338443708/AnsiballZ_container_config_hash.py'
Dec 09 10:39:03 compute-0 sudo[187605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:03 compute-0 python3.9[187607]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:39:03 compute-0 sudo[187605]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:04 compute-0 sudo[187757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caozwpatpqoyitdavgqosuwbbyzjatwa ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276743.6835072-1452-49393812197206/AnsiballZ_edpm_container_manage.py'
Dec 09 10:39:04 compute-0 sudo[187757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:04 compute-0 python3[187759]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:39:04 compute-0 podman[187797]: 2025-12-09 10:39:04.614880566 +0000 UTC m=+0.077641860 container create 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:39:04 compute-0 podman[187797]: 2025-12-09 10:39:04.577873817 +0000 UTC m=+0.040635201 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 09 10:39:04 compute-0 python3[187759]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 09 10:39:04 compute-0 sudo[187757]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:05 compute-0 sudo[187987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fohzvbdmpnbpgoqjuvmsnfhjnrlcrsbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276745.0082822-1460-254527158276021/AnsiballZ_stat.py'
Dec 09 10:39:05 compute-0 sudo[187987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:05 compute-0 python3.9[187989]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:39:05 compute-0 sudo[187987]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:05 compute-0 sshd-session[187914]: Connection closed by authenticating user daemon 159.223.8.217 port 38314 [preauth]
Dec 09 10:39:06 compute-0 sudo[188141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyrilyqtadqvmwitpduafrlmywaqnqyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276746.0549467-1469-188351523015519/AnsiballZ_file.py'
Dec 09 10:39:06 compute-0 sudo[188141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:06 compute-0 python3.9[188143]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:06 compute-0 sudo[188141]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:07 compute-0 sudo[188292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdhoplozmretecwzabxwspsirwkhoawy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276746.7124631-1469-271417735294907/AnsiballZ_copy.py'
Dec 09 10:39:07 compute-0 sudo[188292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:07 compute-0 python3.9[188294]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276746.7124631-1469-271417735294907/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:07 compute-0 sudo[188292]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:07 compute-0 sudo[188368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxfnfzokjinnjvlaqwqafsmzyvcxnult ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276746.7124631-1469-271417735294907/AnsiballZ_systemd.py'
Dec 09 10:39:07 compute-0 sudo[188368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:07 compute-0 python3.9[188370]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:39:07 compute-0 systemd[1]: Reloading.
Dec 09 10:39:07 compute-0 systemd-rc-local-generator[188392]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:39:07 compute-0 systemd-sysv-generator[188396]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:39:08 compute-0 sudo[188368]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:08 compute-0 sudo[188480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbfooaelcukuqamytgkmxkpdijeeklng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276746.7124631-1469-271417735294907/AnsiballZ_systemd.py'
Dec 09 10:39:08 compute-0 sudo[188480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:08 compute-0 python3.9[188482]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:39:08 compute-0 systemd[1]: Reloading.
Dec 09 10:39:08 compute-0 systemd-rc-local-generator[188510]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:39:08 compute-0 systemd-sysv-generator[188514]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:39:09 compute-0 systemd[1]: Starting nova_compute container...
Dec 09 10:39:09 compute-0 podman[188520]: 2025-12-09 10:39:09.234482359 +0000 UTC m=+0.087462233 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:39:09 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:09 compute-0 podman[188522]: 2025-12-09 10:39:09.268281059 +0000 UTC m=+0.114906417 container init 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:39:09 compute-0 podman[188522]: 2025-12-09 10:39:09.273542586 +0000 UTC m=+0.120167924 container start 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute)
Dec 09 10:39:09 compute-0 nova_compute[188553]: + sudo -E kolla_set_configs
Dec 09 10:39:09 compute-0 podman[188522]: nova_compute
Dec 09 10:39:09 compute-0 systemd[1]: Started nova_compute container.
Dec 09 10:39:09 compute-0 sudo[188480]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Validating config file
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Copying service configuration files
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Deleting /etc/ceph
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Creating directory /etc/ceph
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /etc/ceph
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Writing out command to execute
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 09 10:39:09 compute-0 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 09 10:39:09 compute-0 nova_compute[188553]: ++ cat /run_command
Dec 09 10:39:09 compute-0 nova_compute[188553]: + CMD=nova-compute
Dec 09 10:39:09 compute-0 nova_compute[188553]: + ARGS=
Dec 09 10:39:09 compute-0 nova_compute[188553]: + sudo kolla_copy_cacerts
Dec 09 10:39:09 compute-0 nova_compute[188553]: + [[ ! -n '' ]]
Dec 09 10:39:09 compute-0 nova_compute[188553]: + . kolla_extend_start
Dec 09 10:39:09 compute-0 nova_compute[188553]: Running command: 'nova-compute'
Dec 09 10:39:09 compute-0 nova_compute[188553]: + echo 'Running command: '\''nova-compute'\'''
Dec 09 10:39:09 compute-0 nova_compute[188553]: + umask 0022
Dec 09 10:39:09 compute-0 nova_compute[188553]: + exec nova-compute
Dec 09 10:39:10 compute-0 python3.9[188716]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:39:11 compute-0 python3.9[188866]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:39:11 compute-0 nova_compute[188553]: 2025-12-09 10:39:11.358 188558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 10:39:11 compute-0 nova_compute[188553]: 2025-12-09 10:39:11.358 188558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 10:39:11 compute-0 nova_compute[188553]: 2025-12-09 10:39:11.358 188558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 10:39:11 compute-0 nova_compute[188553]: 2025-12-09 10:39:11.358 188558 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 09 10:39:11 compute-0 nova_compute[188553]: 2025-12-09 10:39:11.494 188558 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:39:11 compute-0 nova_compute[188553]: 2025-12-09 10:39:11.526 188558 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:39:11 compute-0 nova_compute[188553]: 2025-12-09 10:39:11.526 188558 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 09 10:39:11 compute-0 python3.9[189020]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.182 188558 INFO nova.virt.driver [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.317 188558 INFO nova.compute.provider_config [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.337 188558 DEBUG oslo_concurrency.lockutils [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.337 188558 DEBUG oslo_concurrency.lockutils [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_concurrency.lockutils [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 WARNING oslo_config.cfg [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 09 10:39:12 compute-0 nova_compute[188553]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 09 10:39:12 compute-0 nova_compute[188553]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Dec 09 10:39:12 compute-0 nova_compute[188553]: and ``live_migration_inbound_addr`` respectively.
Dec 09 10:39:12 compute-0 nova_compute[188553]: ).  Its value may be silently ignored in the future.
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
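Note: the warning above points at the deprecated [libvirt]/live_migration_uri option, whose value here is qemu+tls://%s/system. A minimal nova.conf sketch of the suggested replacement, assuming the same TLS transport and using 192.0.2.10 as a placeholder for this host's migration-network address (not taken from this log):

[libvirt]
# Replaces the scheme portion of the deprecated live_migration_uri
# (qemu+tls://... implies the tls scheme).
live_migration_scheme = tls
# Replaces the target-host portion; set to the migration network IP or
# hostname of this compute node (placeholder value shown).
live_migration_inbound_addr = 192.0.2.10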
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.472 188558 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.488 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.488 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.489 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.489 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 09 10:39:12 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 09 10:39:12 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.594 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0f91a287f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.599 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0f91a287f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.601 188558 INFO nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Connection event '1' reason 'None'
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.623 188558 WARNING nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 09 10:39:12 compute-0 nova_compute[188553]: 2025-12-09 10:39:12.624 188558 DEBUG nova.virt.libvirt.volume.mount [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 09 10:39:12 compute-0 sudo[189222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yynujvfhulnmrwiftnyzglexdvhsebsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276752.2146657-1529-195785582867338/AnsiballZ_podman_container.py'
Dec 09 10:39:12 compute-0 sudo[189222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:13 compute-0 python3.9[189224]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 09 10:39:13 compute-0 sudo[189222]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:13 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 10:39:13 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.573 188558 INFO nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host capabilities <capabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]: 
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <host>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <uuid>6aaf5123-0bdb-461d-92bb-b40c4bea282b</uuid>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <arch>x86_64</arch>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model>EPYC-Rome-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <vendor>AMD</vendor>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <microcode version='16777317'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <signature family='23' model='49' stepping='0'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='x2apic'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='tsc-deadline'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='osxsave'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='hypervisor'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='tsc_adjust'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='spec-ctrl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='stibp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='arch-capabilities'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='cmp_legacy'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='topoext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='virt-ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='lbrv'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='tsc-scale'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='vmcb-clean'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='pause-filter'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='pfthreshold'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='svme-addr-chk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='rdctl-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='skip-l1dfl-vmentry'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='mds-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature name='pschange-mc-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <pages unit='KiB' size='4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <pages unit='KiB' size='2048'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <pages unit='KiB' size='1048576'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <power_management>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <suspend_mem/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <suspend_disk/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <suspend_hybrid/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </power_management>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <iommu support='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <migration_features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <live/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <uri_transports>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <uri_transport>tcp</uri_transport>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <uri_transport>rdma</uri_transport>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </uri_transports>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </migration_features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <topology>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <cells num='1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <cell id='0'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:           <memory unit='KiB'>7864304</memory>
Dec 09 10:39:13 compute-0 nova_compute[188553]:           <pages unit='KiB' size='4'>1966076</pages>
Dec 09 10:39:13 compute-0 nova_compute[188553]:           <pages unit='KiB' size='2048'>0</pages>
Dec 09 10:39:13 compute-0 nova_compute[188553]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 09 10:39:13 compute-0 nova_compute[188553]:           <distances>
Dec 09 10:39:13 compute-0 nova_compute[188553]:             <sibling id='0' value='10'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:           </distances>
Dec 09 10:39:13 compute-0 nova_compute[188553]:           <cpus num='8'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:           </cpus>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         </cell>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </cells>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </topology>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <cache>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </cache>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <secmodel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model>selinux</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <doi>0</doi>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </secmodel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <secmodel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model>dac</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <doi>0</doi>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </secmodel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </host>
Dec 09 10:39:13 compute-0 nova_compute[188553]: 
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <guest>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <os_type>hvm</os_type>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <arch name='i686'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <wordsize>32</wordsize>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <domain type='qemu'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <domain type='kvm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </arch>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <pae/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <nonpae/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <acpi default='on' toggle='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <apic default='on' toggle='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <cpuselection/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <deviceboot/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <disksnapshot default='on' toggle='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <externalSnapshot/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </guest>
Dec 09 10:39:13 compute-0 nova_compute[188553]: 
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <guest>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <os_type>hvm</os_type>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <arch name='x86_64'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <wordsize>64</wordsize>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <domain type='qemu'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <domain type='kvm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </arch>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <acpi default='on' toggle='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <apic default='on' toggle='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <cpuselection/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <deviceboot/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <disksnapshot default='on' toggle='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <externalSnapshot/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </guest>
Dec 09 10:39:13 compute-0 nova_compute[188553]: 
Dec 09 10:39:13 compute-0 nova_compute[188553]: </capabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]: 
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.583 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.603 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 09 10:39:13 compute-0 nova_compute[188553]: <domainCapabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <domain>kvm</domain>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <arch>i686</arch>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <vcpu max='4096'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <iothreads supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <os supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <enum name='firmware'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <loader supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>rom</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pflash</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='readonly'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>yes</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>no</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='secure'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>no</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </loader>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </os>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='host-passthrough' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='hostPassthroughMigratable'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>on</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>off</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='maximum' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='maximumMigratable'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>on</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>off</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='host-model' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <vendor>AMD</vendor>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='x2apic'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='hypervisor'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='stibp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='overflow-recov'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='succor'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='lbrv'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc-scale'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='flushbyasid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='pause-filter'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='pfthreshold'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='disable' name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='custom' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Dhyana-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Genoa'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='auto-ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='auto-ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-128'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-256'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-512'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v6'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v7'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='KnightsMill'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512er'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512pf'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='KnightsMill-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512er'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512pf'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G4-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tbm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G5-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tbm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SierraForest'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cmpccxadd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SierraForest-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cmpccxadd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='athlon'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='athlon-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='core2duo'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='core2duo-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='coreduo'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='coreduo-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='n270'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='n270-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='phenom'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='phenom-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <memoryBacking supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <enum name='sourceType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>file</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>anonymous</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>memfd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </memoryBacking>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <devices>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <disk supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='diskDevice'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>disk</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>cdrom</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>floppy</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>lun</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='bus'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>fdc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>scsi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>sata</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-non-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </disk>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <graphics supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vnc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>egl-headless</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dbus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </graphics>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <video supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='modelType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vga</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>cirrus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>none</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>bochs</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ramfb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </video>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <hostdev supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='mode'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>subsystem</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='startupPolicy'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>default</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>mandatory</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>requisite</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>optional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='subsysType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pci</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>scsi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='capsType'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='pciBackend'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </hostdev>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <rng supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-non-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>random</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>egd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>builtin</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </rng>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <filesystem supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='driverType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>path</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>handle</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtiofs</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </filesystem>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <tpm supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tpm-tis</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tpm-crb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>emulator</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>external</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendVersion'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>2.0</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </tpm>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <redirdev supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='bus'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </redirdev>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <channel supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pty</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>unix</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </channel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <crypto supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>qemu</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>builtin</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </crypto>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <interface supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>default</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>passt</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </interface>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <panic supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>isa</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>hyperv</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </panic>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <console supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>null</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pty</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dev</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>file</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pipe</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>stdio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>udp</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tcp</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>unix</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>qemu-vdagent</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dbus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </console>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </devices>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <gic supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <vmcoreinfo supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <genid supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <backingStoreInput supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <backup supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <async-teardown supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <ps2 supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <sev supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <sgx supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <hyperv supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='features'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>relaxed</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vapic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>spinlocks</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vpindex</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>runtime</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>synic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>stimer</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>reset</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vendor_id</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>frequencies</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>reenlightenment</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tlbflush</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ipi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>avic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>emsr_bitmap</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>xmm_input</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <defaults>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <spinlocks>4095</spinlocks>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <stimer_direct>on</stimer_direct>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </defaults>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </hyperv>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <launchSecurity supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='sectype'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tdx</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </launchSecurity>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </features>
Dec 09 10:39:13 compute-0 nova_compute[188553]: </domainCapabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.612 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 09 10:39:13 compute-0 nova_compute[188553]: <domainCapabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <domain>kvm</domain>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <arch>i686</arch>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <vcpu max='240'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <iothreads supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <os supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <enum name='firmware'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <loader supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>rom</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pflash</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='readonly'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>yes</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>no</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='secure'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>no</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </loader>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </os>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='host-passthrough' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='hostPassthroughMigratable'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>on</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>off</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='maximum' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='maximumMigratable'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>on</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>off</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='host-model' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <vendor>AMD</vendor>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='x2apic'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='hypervisor'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='stibp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='overflow-recov'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='succor'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='lbrv'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc-scale'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='flushbyasid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='pause-filter'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='pfthreshold'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='disable' name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='custom' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Dhyana-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Genoa'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='auto-ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='auto-ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-128'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-256'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-512'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v6'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v7'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='KnightsMill'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512er'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512pf'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='KnightsMill-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512er'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512pf'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G4-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tbm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G5-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tbm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SierraForest'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cmpccxadd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SierraForest-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cmpccxadd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='athlon'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='athlon-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='core2duo'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='core2duo-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='coreduo'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='coreduo-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='n270'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='n270-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='phenom'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='phenom-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <memoryBacking supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <enum name='sourceType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>file</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>anonymous</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>memfd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </memoryBacking>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <devices>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <disk supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='diskDevice'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>disk</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>cdrom</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>floppy</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>lun</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='bus'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ide</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>fdc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>scsi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>sata</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-non-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </disk>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <graphics supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vnc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>egl-headless</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dbus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </graphics>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <video supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='modelType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vga</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>cirrus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>none</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>bochs</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ramfb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </video>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <hostdev supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='mode'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>subsystem</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='startupPolicy'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>default</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>mandatory</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>requisite</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>optional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='subsysType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pci</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>scsi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='capsType'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='pciBackend'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </hostdev>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <rng supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-non-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>random</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>egd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>builtin</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </rng>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <filesystem supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='driverType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>path</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>handle</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtiofs</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </filesystem>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <tpm supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tpm-tis</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tpm-crb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>emulator</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>external</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendVersion'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>2.0</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </tpm>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <redirdev supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='bus'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </redirdev>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <channel supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pty</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>unix</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </channel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <crypto supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>qemu</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>builtin</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </crypto>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <interface supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>default</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>passt</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </interface>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <panic supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>isa</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>hyperv</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </panic>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <console supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>null</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pty</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dev</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>file</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pipe</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>stdio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>udp</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tcp</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>unix</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>qemu-vdagent</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dbus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </console>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </devices>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <gic supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <vmcoreinfo supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <genid supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <backingStoreInput supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <backup supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <async-teardown supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <ps2 supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <sev supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <sgx supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <hyperv supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='features'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>relaxed</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vapic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>spinlocks</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vpindex</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>runtime</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>synic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>stimer</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>reset</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vendor_id</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>frequencies</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>reenlightenment</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tlbflush</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ipi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>avic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>emsr_bitmap</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>xmm_input</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <defaults>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <spinlocks>4095</spinlocks>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <stimer_direct>on</stimer_direct>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </defaults>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </hyperv>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <launchSecurity supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='sectype'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tdx</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </launchSecurity>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </features>
Dec 09 10:39:13 compute-0 nova_compute[188553]: </domainCapabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
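The dump logged above is the domainCapabilities document libvirt returns for one machine type; the q35 dump that follows is fetched the same way. As a minimal sketch (not part of the log), an equivalent document can be retrieved with the libvirt Python bindings, assuming the default qemu:///system URI and the emulator path and machine type shown in the log:

    # Illustrative sketch only: URI, emulator path and machine type are
    # taken from the surrounding log lines, not from nova's own code.
    import libvirt

    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',  # <path> element in the dump
        arch='x86_64',
        machine='q35',
        virttype='kvm',
        flags=0,
    )
    print(caps_xml)  # prints a <domainCapabilities> document like the one logged
    conn.close()

The same information is available on the host via `virsh domcapabilities --arch x86_64 --machine q35 --virttype kvm`.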
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.636 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.641 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 09 10:39:13 compute-0 nova_compute[188553]: <domainCapabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <domain>kvm</domain>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <arch>x86_64</arch>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <vcpu max='4096'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <iothreads supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <os supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <enum name='firmware'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>efi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <loader supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>rom</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pflash</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='readonly'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>yes</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>no</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='secure'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>yes</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>no</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </loader>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </os>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='host-passthrough' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='hostPassthroughMigratable'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>on</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>off</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='maximum' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='maximumMigratable'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>on</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>off</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='host-model' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <vendor>AMD</vendor>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='x2apic'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='hypervisor'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='stibp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='overflow-recov'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='succor'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='lbrv'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc-scale'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='flushbyasid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='pause-filter'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='pfthreshold'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='disable' name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='custom' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Dhyana-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Genoa'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='auto-ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='auto-ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-128'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-256'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-512'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v6'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v7'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='KnightsMill'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512er'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512pf'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='KnightsMill-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512er'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512pf'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G4-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tbm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G5-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tbm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SierraForest'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cmpccxadd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SierraForest-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cmpccxadd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='athlon'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='athlon-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='core2duo'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='core2duo-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='coreduo'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='coreduo-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='n270'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='n270-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='phenom'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='phenom-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <memoryBacking supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <enum name='sourceType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>file</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>anonymous</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>memfd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </memoryBacking>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <devices>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <disk supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='diskDevice'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>disk</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>cdrom</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>floppy</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>lun</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='bus'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>fdc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>scsi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>sata</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-non-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </disk>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <graphics supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vnc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>egl-headless</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dbus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </graphics>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <video supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='modelType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vga</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>cirrus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>none</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>bochs</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ramfb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </video>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <hostdev supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='mode'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>subsystem</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='startupPolicy'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>default</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>mandatory</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>requisite</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>optional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='subsysType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pci</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>scsi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='capsType'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='pciBackend'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </hostdev>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <rng supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-non-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>random</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>egd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>builtin</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </rng>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <filesystem supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='driverType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>path</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>handle</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtiofs</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </filesystem>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <tpm supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tpm-tis</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tpm-crb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>emulator</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>external</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendVersion'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>2.0</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </tpm>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <redirdev supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='bus'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </redirdev>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <channel supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pty</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>unix</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </channel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <crypto supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>qemu</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>builtin</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </crypto>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <interface supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>default</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>passt</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </interface>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <panic supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>isa</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>hyperv</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </panic>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <console supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>null</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pty</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dev</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>file</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pipe</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>stdio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>udp</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tcp</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>unix</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>qemu-vdagent</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dbus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </console>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </devices>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <gic supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <vmcoreinfo supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <genid supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <backingStoreInput supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <backup supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <async-teardown supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <ps2 supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <sev supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <sgx supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <hyperv supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='features'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>relaxed</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vapic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>spinlocks</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vpindex</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>runtime</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>synic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>stimer</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>reset</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vendor_id</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>frequencies</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>reenlightenment</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tlbflush</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ipi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>avic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>emsr_bitmap</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>xmm_input</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <defaults>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <spinlocks>4095</spinlocks>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <stimer_direct>on</stimer_direct>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </defaults>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </hyperv>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <launchSecurity supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='sectype'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tdx</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </launchSecurity>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </features>
Dec 09 10:39:13 compute-0 nova_compute[188553]: </domainCapabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.702 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 09 10:39:13 compute-0 nova_compute[188553]: <domainCapabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <domain>kvm</domain>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <arch>x86_64</arch>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <vcpu max='240'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <iothreads supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <os supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <enum name='firmware'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <loader supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>rom</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pflash</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='readonly'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>yes</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>no</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='secure'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>no</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </loader>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </os>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='host-passthrough' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='hostPassthroughMigratable'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>on</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>off</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='maximum' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='maximumMigratable'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>on</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>off</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='host-model' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <vendor>AMD</vendor>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='x2apic'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='hypervisor'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='stibp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='overflow-recov'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='succor'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='lbrv'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='tsc-scale'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='flushbyasid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='pause-filter'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='pfthreshold'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <feature policy='disable' name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <mode name='custom' supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Broadwell-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Cooperlake-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Denverton-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Dhyana-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Genoa'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='auto-ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='auto-ibrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Milan-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amd-psfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='stibp-always-on'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-Rome-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='EPYC-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='GraniteRapids-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-128'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-256'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx10-512'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='prefetchiti'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Haswell-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v6'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Icelake-Server-v7'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='IvyBridge-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='KnightsMill'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512er'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512pf'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='KnightsMill-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512er'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512pf'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G4-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tbm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Opteron_G5-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fma4'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tbm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xop'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SapphireRapids-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='amx-tile'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-bf16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-fp16'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bitalg'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrc'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fzrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='la57'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='taa-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xfd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SierraForest'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cmpccxadd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='SierraForest-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ifma'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cmpccxadd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fbsdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='fsrs'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ibrs-all'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mcdt-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pbrsb-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='psdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='serialize'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vaes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Client-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='hle'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='rtm'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v3'>
Dec 09 10:39:13 compute-0 sudo[189406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fojmgwylknadswksxuyikjaaqsqamkpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276753.5041475-1537-278739004460654/AnsiballZ_systemd.py'
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Skylake-Server-v5'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512bw'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512cd'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512dq'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512f'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='avx512vl'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='invpcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pcid'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='pku'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='mpx'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v2'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v3'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='core-capability'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='split-lock-detect'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='Snowridge-v4'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='cldemote'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='erms'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='gfni'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdir64b'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='movdiri'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='xsaves'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='athlon'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='athlon-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='core2duo'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='core2duo-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='coreduo'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='coreduo-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='n270'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='n270-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='ss'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='phenom'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <blockers model='phenom-v1'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnow'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <feature name='3dnowext'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </blockers>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </mode>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </cpu>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <memoryBacking supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <enum name='sourceType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>file</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>anonymous</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <value>memfd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </memoryBacking>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <devices>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <disk supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='diskDevice'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>disk</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>cdrom</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>floppy</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>lun</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='bus'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ide</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>fdc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>scsi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>sata</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-non-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </disk>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <graphics supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vnc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>egl-headless</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dbus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </graphics>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <video supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='modelType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vga</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>cirrus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>none</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>bochs</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ramfb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </video>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <hostdev supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='mode'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>subsystem</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='startupPolicy'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>default</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>mandatory</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>requisite</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>optional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='subsysType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pci</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>scsi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='capsType'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='pciBackend'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </hostdev>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <rng supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtio-non-transitional</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>random</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>egd</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>builtin</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </rng>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <filesystem supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='driverType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>path</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>handle</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>virtiofs</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </filesystem>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <tpm supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tpm-tis</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tpm-crb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>emulator</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>external</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendVersion'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>2.0</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </tpm>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <redirdev supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='bus'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>usb</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </redirdev>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <channel supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pty</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>unix</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </channel>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <crypto supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>qemu</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendModel'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>builtin</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </crypto>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <interface supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='backendType'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>default</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>passt</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </interface>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <panic supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='model'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>isa</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>hyperv</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </panic>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <console supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='type'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>null</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vc</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pty</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dev</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>file</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>pipe</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>stdio</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>udp</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tcp</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>unix</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>qemu-vdagent</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>dbus</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </console>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </devices>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   <features>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <gic supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <vmcoreinfo supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <genid supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <backingStoreInput supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <backup supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <async-teardown supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <ps2 supported='yes'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <sev supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <sgx supported='no'/>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <hyperv supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='features'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>relaxed</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vapic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>spinlocks</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vpindex</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>runtime</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>synic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>stimer</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>reset</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>vendor_id</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>frequencies</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>reenlightenment</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tlbflush</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>ipi</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>avic</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>emsr_bitmap</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>xmm_input</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <defaults>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <spinlocks>4095</spinlocks>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <stimer_direct>on</stimer_direct>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </defaults>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </hyperv>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     <launchSecurity supported='yes'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       <enum name='sectype'>
Dec 09 10:39:13 compute-0 nova_compute[188553]:         <value>tdx</value>
Dec 09 10:39:13 compute-0 nova_compute[188553]:       </enum>
Dec 09 10:39:13 compute-0 nova_compute[188553]:     </launchSecurity>
Dec 09 10:39:13 compute-0 nova_compute[188553]:   </features>
Dec 09 10:39:13 compute-0 nova_compute[188553]: </domainCapabilities>
Dec 09 10:39:13 compute-0 nova_compute[188553]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
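
The dump that ends above is libvirt's domainCapabilities document, which nova retrieves through the virConnect.getDomainCapabilities() call referenced in host.py:1037 and then mines for usable CPU models; the same document can be printed on the host with "virsh domcapabilities". A minimal sketch of pulling the model/blocker information out of a saved copy with only the standard library (the file name domcaps.xml is an assumption):

    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps.xml').getroot()

    # Map each <blockers model='...'> element to its missing feature names.
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in root.iter('blockers')
    }

    for model in root.iter('model'):
        if model.get('usable') == 'yes':
            print(f'usable: {model.text}')
        else:
            missing = ', '.join(blockers.get(model.text, []))
            print(f'not usable: {model.text} (blocked by: {missing})')
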
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.765 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.766 188558 INFO nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Secure Boot support detected
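
The "Secure Boot support detected" line comes from nova inspecting the same domainCapabilities document for a secure-capable firmware loader. The <os>/<loader> block is not part of the excerpt above, so the element layout below is recalled from the domainCapabilities schema rather than copied from this log; treat it as a sketch:

    import xml.etree.ElementTree as ET

    root = ET.parse('domcaps.xml').getroot()

    # <os><loader><enum name='secure'><value>yes</value> advertises Secure
    # Boot capable firmware (layout assumed, see note above).
    enum = root.find(".//os/loader/enum[@name='secure']")
    has_secure_boot = enum is not None and any(
        v.text == 'yes' for v in enum.findall('value'))
    print('Secure Boot support detected' if has_secure_boot
          else 'Secure Boot not available')
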
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.769 188558 INFO nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.785 188558 DEBUG nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
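
The post-copy and vTPM messages above are both gated on nova's [libvirt] configuration options. A hedged illustration of how such options are declared with oslo.config; the option names match the log, but the defaults and help strings here are illustrative, not nova's actual definitions:

    from oslo_config import cfg

    libvirt_opts = [
        cfg.BoolOpt('live_migration_permit_post_copy', default=True,
                    help='Allow post-copy; when available it is preferred '
                         'over auto-converge, as the log states.'),
        cfg.BoolOpt('swtpm_enabled', default=False,
                    help='Enable emulated TPM (vTPM) support.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(libvirt_opts, group='libvirt')

    # Once CONF(...) has parsed nova.conf, the driver reads for example:
    #   CONF.libvirt.live_migration_permit_post_copy
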
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.827 188558 INFO nova.virt.node [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Determined node identity cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from /var/lib/nova/compute_id
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.856 188558 WARNING nova.compute.manager [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Compute nodes ['cdc1168d-33c9-4d2c-8f23-1b695a68afd0'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.890 188558 INFO nova.compute.manager [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.943 188558 WARNING nova.compute.manager [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.943 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.944 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.944 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
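
The Acquiring/acquired/released trio above is oslo.concurrency's lockutils protecting the resource tracker's "compute_resources" critical section. A minimal sketch of the same pattern (the function body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs under the in-process "compute_resources" lock; entry and
        # exit produce exactly the acquired/released DEBUG lines above.
        pass

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass
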
Dec 09 10:39:13 compute-0 nova_compute[188553]: 2025-12-09 10:39:13.945 188558 DEBUG nova.compute.resource_tracker [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:39:13 compute-0 sudo[189406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:13 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 09 10:39:14 compute-0 systemd[1]: Started libvirt nodedev daemon.
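
systemd here starts virtnodedevd, one of libvirt's modular daemons, which answers the node-device enumeration that nova's PCI tracking relies on. A small sketch of listing node devices through libvirt-python (the qemu:///system URI is the usual system connection; read access to the daemon is required):

    import libvirt

    conn = libvirt.open('qemu:///system')
    # Each entry corresponds to a node device record served by virtnodedevd,
    # e.g. the pci_0000_00_02_0 style ids seen in the resource view below.
    for dev in conn.listAllDevices():
        print(dev.name())
    conn.close()
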
Dec 09 10:39:14 compute-0 python3.9[189408]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:39:14 compute-0 systemd[1]: Stopping nova_compute container...
Dec 09 10:39:14 compute-0 nova_compute[188553]: 2025-12-09 10:39:14.281 188558 WARNING nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:39:14 compute-0 nova_compute[188553]: 2025-12-09 10:39:14.282 188558 DEBUG nova.compute.resource_tracker [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6043MB free_disk=72.40962219238281GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
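
The pci_devices field in the resource view above is a JSON array embedded in the log line. A quick sketch of summarizing it; the list literal below is truncated to two entries taken from that line:

    import json
    from collections import Counter

    devices = json.loads('''[
      {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0",
       "product_id": "1050", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1050", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3",
       "product_id": "7113", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_7113", "dev_type": "type-PCI"}
    ]''')

    # 1af4 is the virtio vendor id, 8086 is Intel.
    print(Counter(d['vendor_id'] for d in devices))
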
Dec 09 10:39:14 compute-0 nova_compute[188553]: 2025-12-09 10:39:14.282 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:39:14 compute-0 nova_compute[188553]: 2025-12-09 10:39:14.282 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:39:14 compute-0 virtqemud[189118]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 09 10:39:14 compute-0 virtqemud[189118]: hostname: compute-0
Dec 09 10:39:14 compute-0 virtqemud[189118]: End of file while reading data: Input/output error
Dec 09 10:39:14 compute-0 systemd[1]: libpod-2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14.scope: Deactivated successfully.
Dec 09 10:39:14 compute-0 systemd[1]: libpod-2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14.scope: Consumed 2.819s CPU time.
Dec 09 10:39:14 compute-0 podman[189435]: 2025-12-09 10:39:14.310188229 +0000 UTC m=+0.077435855 container died 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, managed_by=edpm_ansible)
Dec 09 10:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14-userdata-shm.mount: Deactivated successfully.
Dec 09 10:39:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e-merged.mount: Deactivated successfully.
Dec 09 10:39:14 compute-0 podman[189435]: 2025-12-09 10:39:14.361422954 +0000 UTC m=+0.128670580 container cleanup 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 10:39:14 compute-0 podman[189435]: nova_compute
Dec 09 10:39:14 compute-0 podman[189464]: nova_compute
Dec 09 10:39:14 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 09 10:39:14 compute-0 systemd[1]: Stopped nova_compute container.
Dec 09 10:39:14 compute-0 systemd[1]: Starting nova_compute container...
Dec 09 10:39:14 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:14 compute-0 podman[189477]: 2025-12-09 10:39:14.583169341 +0000 UTC m=+0.123195317 container init 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=nova_compute, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:39:14 compute-0 podman[189477]: 2025-12-09 10:39:14.590577708 +0000 UTC m=+0.130603664 container start 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 09 10:39:14 compute-0 podman[189477]: nova_compute
Dec 09 10:39:14 compute-0 nova_compute[189493]: + sudo -E kolla_set_configs
Dec 09 10:39:14 compute-0 systemd[1]: Started nova_compute container.
Dec 09 10:39:14 compute-0 sudo[189406]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Validating config file
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Copying service configuration files
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Deleting /etc/ceph
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Creating directory /etc/ceph
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /etc/ceph
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Writing out command to execute
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 09 10:39:14 compute-0 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 09 10:39:14 compute-0 nova_compute[189493]: ++ cat /run_command
Dec 09 10:39:14 compute-0 nova_compute[189493]: + CMD=nova-compute
Dec 09 10:39:14 compute-0 nova_compute[189493]: + ARGS=
Dec 09 10:39:14 compute-0 nova_compute[189493]: + sudo kolla_copy_cacerts
Dec 09 10:39:14 compute-0 nova_compute[189493]: + [[ ! -n '' ]]
Dec 09 10:39:14 compute-0 nova_compute[189493]: + . kolla_extend_start
Dec 09 10:39:14 compute-0 nova_compute[189493]: Running command: 'nova-compute'
Dec 09 10:39:14 compute-0 nova_compute[189493]: + echo 'Running command: '\''nova-compute'\'''
Dec 09 10:39:14 compute-0 nova_compute[189493]: + umask 0022
Dec 09 10:39:14 compute-0 nova_compute[189493]: + exec nova-compute
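
The trace above is kolla's standard container start sequence: kolla_set_configs copies files according to /var/lib/kolla/config_files/config.json, then the wrapper reads /run_command and execs it. The copy and permission steps logged earlier are driven by entries like the following; the shape is recalled from kolla's config.json schema, and the owner/perm values are assumptions rather than values taken from this deployment:

    import json

    config = {
        "command": "nova-compute",                      # becomes /run_command
        "config_files": [
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
        ],
        "permissions": [
            {"path": "/var/lib/nova", "owner": "nova:nova", "recurse": True},
        ],
    }
    print(json.dumps(config, indent=2))
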
Dec 09 10:39:15 compute-0 sudo[189654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iandqdrlhvfwmmyoolcqrxsguiotefvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276754.8497984-1546-15226088796845/AnsiballZ_podman_container.py'
Dec 09 10:39:15 compute-0 sudo[189654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:15 compute-0 python3.9[189656]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 09 10:39:15 compute-0 systemd[1]: Started libpod-conmon-73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e.scope.
Dec 09 10:39:15 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad08c0d724bb2a32e3cc93fc7084ed5878057cc41ef5149a8ec0a2e82512589/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad08c0d724bb2a32e3cc93fc7084ed5878057cc41ef5149a8ec0a2e82512589/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad08c0d724bb2a32e3cc93fc7084ed5878057cc41ef5149a8ec0a2e82512589/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 09 10:39:15 compute-0 podman[189679]: 2025-12-09 10:39:15.653538033 +0000 UTC m=+0.142258817 container init 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 09 10:39:15 compute-0 podman[189679]: 2025-12-09 10:39:15.659843319 +0000 UTC m=+0.148564073 container start 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 09 10:39:15 compute-0 python3.9[189656]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Applying nova statedir ownership
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 09 10:39:15 compute-0 nova_compute_init[189701]: INFO:nova_statedir:Nova statedir ownership complete
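
The nova_compute_init pass above walks /var/lib/nova, re-owns anything not already 42436:42436 (the nova uid/gid inside the container), resets SELinux contexts, and honors the NOVA_STATEDIR_OWNERSHIP_SKIP entry for /var/lib/nova/compute_id. A simplified sketch of the ownership part only (the real nova_statedir_ownership.py also does the SELinux relabelling, omitted here):

    import os

    TARGET_UID = TARGET_GID = 42436            # container-side nova uid/gid
    SKIP = {'/var/lib/nova/compute_id'}        # NOVA_STATEDIR_OWNERSHIP_SKIP

    def fix(path):
        st = os.lstat(path)
        if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
            # lchown so symlinks themselves are re-owned, not their targets.
            os.lchown(path, TARGET_UID, TARGET_GID)

    for dirpath, dirnames, filenames in os.walk('/var/lib/nova'):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path not in SKIP:
                fix(path)
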
Dec 09 10:39:15 compute-0 systemd[1]: libpod-73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e.scope: Deactivated successfully.
Dec 09 10:39:15 compute-0 podman[189715]: 2025-12-09 10:39:15.76592386 +0000 UTC m=+0.026799777 container died 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Dec 09 10:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e-userdata-shm.mount: Deactivated successfully.
Dec 09 10:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-bad08c0d724bb2a32e3cc93fc7084ed5878057cc41ef5149a8ec0a2e82512589-merged.mount: Deactivated successfully.
Dec 09 10:39:15 compute-0 podman[189715]: 2025-12-09 10:39:15.800065529 +0000 UTC m=+0.060941366 container cleanup 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 09 10:39:15 compute-0 systemd[1]: libpod-conmon-73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e.scope: Deactivated successfully.
Dec 09 10:39:15 compute-0 sudo[189654]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:16 compute-0 sshd-session[161337]: Connection closed by 192.168.122.30 port 44538
Dec 09 10:39:16 compute-0 sshd-session[161334]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:39:16 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Dec 09 10:39:16 compute-0 systemd[1]: session-24.scope: Consumed 2min 3.628s CPU time.
Dec 09 10:39:16 compute-0 systemd-logind[806]: Session 24 logged out. Waiting for processes to exit.
Dec 09 10:39:16 compute-0 systemd-logind[806]: Removed session 24.
Dec 09 10:39:16 compute-0 nova_compute[189493]: 2025-12-09 10:39:16.654 189497 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 10:39:16 compute-0 nova_compute[189493]: 2025-12-09 10:39:16.655 189497 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 10:39:16 compute-0 nova_compute[189493]: 2025-12-09 10:39:16.655 189497 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 09 10:39:16 compute-0 nova_compute[189493]: 2025-12-09 10:39:16.655 189497 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 09 10:39:16 compute-0 nova_compute[189493]: 2025-12-09 10:39:16.801 189497 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:39:16 compute-0 nova_compute[189493]: 2025-12-09 10:39:16.830 189497 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:39:16 compute-0 nova_compute[189493]: 2025-12-09 10:39:16.831 189497 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 09 10:39:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:39:16.965 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:39:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:39:16.966 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:39:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:39:16.966 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.384 189497 INFO nova.virt.driver [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.515 189497 INFO nova.compute.provider_config [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.528 189497 DEBUG oslo_concurrency.lockutils [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_concurrency.lockutils [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_concurrency.lockutils [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.534 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.534 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.534 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.534 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 WARNING oslo_config.cfg [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 09 10:39:17 compute-0 nova_compute[189493]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 09 10:39:17 compute-0 nova_compute[189493]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 09 10:39:17 compute-0 nova_compute[189493]: and ``live_migration_inbound_addr`` respectively.
Dec 09 10:39:17 compute-0 nova_compute[189493]: ).  Its value may be silently ignored in the future.
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.668 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.668 189497 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.684 189497 INFO nova.virt.node [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Determined node identity cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from /var/lib/nova/compute_id
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.684 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.685 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.685 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.686 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.700 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f623b10de50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.704 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f623b10de50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.704 189497 INFO nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Connection event '1' reason 'None'
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.712 189497 INFO nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host capabilities <capabilities>
Dec 09 10:39:17 compute-0 nova_compute[189493]: 
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <host>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <uuid>6aaf5123-0bdb-461d-92bb-b40c4bea282b</uuid>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <cpu>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <arch>x86_64</arch>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model>EPYC-Rome-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <vendor>AMD</vendor>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <microcode version='16777317'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <signature family='23' model='49' stepping='0'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='x2apic'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='tsc-deadline'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='osxsave'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='hypervisor'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='tsc_adjust'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='spec-ctrl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='stibp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='arch-capabilities'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='cmp_legacy'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='topoext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='virt-ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='lbrv'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='tsc-scale'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='vmcb-clean'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='pause-filter'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='pfthreshold'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='svme-addr-chk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='rdctl-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='skip-l1dfl-vmentry'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='mds-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature name='pschange-mc-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <pages unit='KiB' size='4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <pages unit='KiB' size='2048'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <pages unit='KiB' size='1048576'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </cpu>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <power_management>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <suspend_mem/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <suspend_disk/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <suspend_hybrid/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </power_management>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <iommu support='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <migration_features>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <live/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <uri_transports>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <uri_transport>tcp</uri_transport>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <uri_transport>rdma</uri_transport>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </uri_transports>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </migration_features>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <topology>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <cells num='1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <cell id='0'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:           <memory unit='KiB'>7864304</memory>
Dec 09 10:39:17 compute-0 nova_compute[189493]:           <pages unit='KiB' size='4'>1966076</pages>
Dec 09 10:39:17 compute-0 nova_compute[189493]:           <pages unit='KiB' size='2048'>0</pages>
Dec 09 10:39:17 compute-0 nova_compute[189493]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 09 10:39:17 compute-0 nova_compute[189493]:           <distances>
Dec 09 10:39:17 compute-0 nova_compute[189493]:             <sibling id='0' value='10'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:           </distances>
Dec 09 10:39:17 compute-0 nova_compute[189493]:           <cpus num='8'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:           </cpus>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         </cell>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </cells>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </topology>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <cache>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </cache>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <secmodel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model>selinux</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <doi>0</doi>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </secmodel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <secmodel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model>dac</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <doi>0</doi>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </secmodel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </host>
Dec 09 10:39:17 compute-0 nova_compute[189493]: 
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <guest>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <os_type>hvm</os_type>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <arch name='i686'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <wordsize>32</wordsize>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <domain type='qemu'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <domain type='kvm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </arch>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <features>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <pae/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <nonpae/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <acpi default='on' toggle='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <apic default='on' toggle='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <cpuselection/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <deviceboot/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <disksnapshot default='on' toggle='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <externalSnapshot/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </features>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </guest>
Dec 09 10:39:17 compute-0 nova_compute[189493]: 
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <guest>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <os_type>hvm</os_type>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <arch name='x86_64'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <wordsize>64</wordsize>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <domain type='qemu'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <domain type='kvm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </arch>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <features>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <acpi default='on' toggle='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <apic default='on' toggle='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <cpuselection/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <deviceboot/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <disksnapshot default='on' toggle='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <externalSnapshot/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </features>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </guest>
Dec 09 10:39:17 compute-0 nova_compute[189493]: 
Dec 09 10:39:17 compute-0 nova_compute[189493]: </capabilities>
Dec 09 10:39:17 compute-0 nova_compute[189493]: 
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.722 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.724 189497 DEBUG nova.virt.libvirt.volume.mount [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.728 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 09 10:39:17 compute-0 nova_compute[189493]: <domainCapabilities>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <domain>kvm</domain>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <arch>i686</arch>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <vcpu max='240'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <iothreads supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <os supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <enum name='firmware'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <loader supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>rom</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pflash</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='readonly'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>yes</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>no</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='secure'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>no</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </loader>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </os>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <cpu>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='host-passthrough' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='hostPassthroughMigratable'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>on</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>off</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='maximum' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='maximumMigratable'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>on</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>off</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='host-model' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <vendor>AMD</vendor>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='x2apic'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='hypervisor'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='stibp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='overflow-recov'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='succor'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='lbrv'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc-scale'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='flushbyasid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='pause-filter'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='pfthreshold'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='disable' name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='custom' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Dhyana-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Genoa'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='auto-ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='auto-ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10-128'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10-256'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10-512'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v6'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v7'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='KnightsMill'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512er'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512pf'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='KnightsMill-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512er'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512pf'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G4-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tbm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G5-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tbm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SierraForest'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cmpccxadd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SierraForest-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cmpccxadd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='athlon'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='athlon-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='core2duo'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='core2duo-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='coreduo'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='coreduo-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='n270'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='n270-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='phenom'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='phenom-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </cpu>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <memoryBacking supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <enum name='sourceType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>file</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>anonymous</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>memfd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </memoryBacking>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <devices>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <disk supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='diskDevice'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>disk</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>cdrom</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>floppy</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>lun</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='bus'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>ide</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>fdc</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>scsi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>sata</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-non-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <graphics supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vnc</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>egl-headless</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>dbus</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </graphics>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <video supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='modelType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vga</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>cirrus</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>none</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>bochs</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>ramfb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </video>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <hostdev supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='mode'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>subsystem</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='startupPolicy'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>default</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>mandatory</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>requisite</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>optional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='subsysType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pci</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>scsi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='capsType'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='pciBackend'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </hostdev>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <rng supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-non-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>random</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>egd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>builtin</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </rng>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <filesystem supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='driverType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>path</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>handle</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtiofs</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </filesystem>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <tpm supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tpm-tis</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tpm-crb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>emulator</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>external</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendVersion'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>2.0</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </tpm>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <redirdev supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='bus'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </redirdev>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <channel supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pty</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>unix</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </channel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <crypto supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>qemu</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>builtin</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </crypto>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <interface supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>default</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>passt</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </interface>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <panic supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>isa</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>hyperv</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </panic>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <console supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>null</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vc</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pty</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>dev</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>file</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pipe</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>stdio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>udp</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tcp</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>unix</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>qemu-vdagent</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>dbus</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </console>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </devices>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <features>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <gic supported='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <vmcoreinfo supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <genid supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <backingStoreInput supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <backup supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <async-teardown supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <ps2 supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <sev supported='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <sgx supported='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <hyperv supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='features'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>relaxed</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vapic</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>spinlocks</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vpindex</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>runtime</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>synic</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>stimer</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>reset</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vendor_id</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>frequencies</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>reenlightenment</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tlbflush</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>ipi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>avic</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>emsr_bitmap</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>xmm_input</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <defaults>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <spinlocks>4095</spinlocks>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <stimer_direct>on</stimer_direct>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </defaults>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </hyperv>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <launchSecurity supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='sectype'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tdx</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </launchSecurity>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </features>
Dec 09 10:39:17 compute-0 nova_compute[189493]: </domainCapabilities>
Dec 09 10:39:17 compute-0 nova_compute[189493]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.735 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 09 10:39:17 compute-0 nova_compute[189493]: <domainCapabilities>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <domain>kvm</domain>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <arch>i686</arch>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <vcpu max='4096'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <iothreads supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <os supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <enum name='firmware'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <loader supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>rom</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pflash</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='readonly'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>yes</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>no</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='secure'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>no</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </loader>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </os>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <cpu>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='host-passthrough' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='hostPassthroughMigratable'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>on</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>off</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='maximum' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='maximumMigratable'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>on</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>off</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='host-model' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <vendor>AMD</vendor>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='x2apic'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='hypervisor'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='stibp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='overflow-recov'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='succor'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='lbrv'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc-scale'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='flushbyasid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='pause-filter'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='pfthreshold'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='disable' name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='custom' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Dhyana-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Genoa'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='auto-ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='auto-ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10-128'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10-256'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10-512'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v6'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v7'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='KnightsMill'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512er'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512pf'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='KnightsMill-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512er'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512pf'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G4-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tbm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G5-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tbm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SierraForest'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cmpccxadd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SierraForest-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cmpccxadd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='athlon'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='athlon-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='core2duo'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='core2duo-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='coreduo'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='coreduo-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='n270'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='n270-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='phenom'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='phenom-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </cpu>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <memoryBacking supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <enum name='sourceType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>file</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>anonymous</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>memfd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </memoryBacking>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <devices>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <disk supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='diskDevice'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>disk</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>cdrom</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>floppy</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>lun</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='bus'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>fdc</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>scsi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>sata</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-non-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <graphics supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vnc</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>egl-headless</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>dbus</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </graphics>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <video supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='modelType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vga</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>cirrus</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>none</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>bochs</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>ramfb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </video>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <hostdev supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='mode'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>subsystem</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='startupPolicy'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>default</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>mandatory</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>requisite</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>optional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='subsysType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pci</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>scsi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='capsType'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='pciBackend'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </hostdev>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <rng supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-non-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>random</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>egd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>builtin</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </rng>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <filesystem supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='driverType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>path</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>handle</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtiofs</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </filesystem>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <tpm supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tpm-tis</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tpm-crb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>emulator</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>external</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendVersion'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>2.0</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </tpm>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <redirdev supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='bus'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </redirdev>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <channel supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pty</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>unix</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </channel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <crypto supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>qemu</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>builtin</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </crypto>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <interface supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>default</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>passt</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </interface>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <panic supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>isa</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>hyperv</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </panic>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <console supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>null</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vc</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pty</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>dev</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>file</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pipe</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>stdio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>udp</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tcp</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>unix</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>qemu-vdagent</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>dbus</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </console>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </devices>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <features>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <gic supported='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <vmcoreinfo supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <genid supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <backingStoreInput supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <backup supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <async-teardown supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <ps2 supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <sev supported='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <sgx supported='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <hyperv supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='features'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>relaxed</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vapic</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>spinlocks</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vpindex</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>runtime</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>synic</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>stimer</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>reset</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vendor_id</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>frequencies</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>reenlightenment</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tlbflush</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>ipi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>avic</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>emsr_bitmap</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>xmm_input</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <defaults>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <spinlocks>4095</spinlocks>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <stimer_direct>on</stimer_direct>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </defaults>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </hyperv>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <launchSecurity supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='sectype'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tdx</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </launchSecurity>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </features>
Dec 09 10:39:17 compute-0 nova_compute[189493]: </domainCapabilities>
Dec 09 10:39:17 compute-0 nova_compute[189493]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.784 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.789 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 09 10:39:17 compute-0 nova_compute[189493]: <domainCapabilities>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <domain>kvm</domain>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <arch>x86_64</arch>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <vcpu max='4096'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <iothreads supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <os supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <enum name='firmware'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>efi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <loader supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>rom</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pflash</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='readonly'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>yes</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>no</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='secure'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>yes</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>no</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </loader>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </os>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <cpu>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='host-passthrough' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='hostPassthroughMigratable'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>on</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>off</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='maximum' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='maximumMigratable'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>on</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>off</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='host-model' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <vendor>AMD</vendor>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='x2apic'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='hypervisor'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='stibp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='overflow-recov'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='succor'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='lbrv'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc-scale'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='flushbyasid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='pause-filter'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='pfthreshold'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='disable' name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='custom' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Dhyana-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Genoa'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='auto-ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='auto-ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10-128'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10-256'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx10-512'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Haswell-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v6'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v7'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='KnightsMill'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512er'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512pf'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='KnightsMill-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512er'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512pf'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G4-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tbm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Opteron_G5-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tbm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SierraForest'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cmpccxadd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='SierraForest-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cmpccxadd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='athlon'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='athlon-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='core2duo'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='core2duo-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='coreduo'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='coreduo-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='n270'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='n270-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='phenom'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='phenom-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </cpu>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <memoryBacking supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <enum name='sourceType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>file</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>anonymous</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>memfd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </memoryBacking>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <devices>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <disk supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='diskDevice'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>disk</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>cdrom</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>floppy</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>lun</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='bus'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>fdc</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>scsi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>sata</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-non-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <graphics supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vnc</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>egl-headless</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>dbus</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </graphics>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <video supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='modelType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vga</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>cirrus</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>none</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>bochs</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>ramfb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </video>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <hostdev supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='mode'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>subsystem</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='startupPolicy'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>default</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>mandatory</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>requisite</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>optional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='subsysType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pci</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>scsi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='capsType'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='pciBackend'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </hostdev>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <rng supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtio-non-transitional</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>random</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>egd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>builtin</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </rng>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <filesystem supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='driverType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>path</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>handle</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>virtiofs</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </filesystem>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <tpm supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tpm-tis</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tpm-crb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>emulator</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>external</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendVersion'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>2.0</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </tpm>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <redirdev supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='bus'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </redirdev>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <channel supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pty</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>unix</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </channel>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <crypto supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>qemu</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>builtin</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </crypto>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <interface supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='backendType'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>default</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>passt</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </interface>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <panic supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>isa</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>hyperv</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </panic>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <console supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>null</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vc</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pty</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>dev</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>file</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pipe</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>stdio</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>udp</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tcp</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>unix</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>qemu-vdagent</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>dbus</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </console>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </devices>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <features>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <gic supported='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <vmcoreinfo supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <genid supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <backingStoreInput supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <backup supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <async-teardown supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <ps2 supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <sev supported='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <sgx supported='no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <hyperv supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='features'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>relaxed</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vapic</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>spinlocks</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vpindex</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>runtime</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>synic</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>stimer</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>reset</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>vendor_id</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>frequencies</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>reenlightenment</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tlbflush</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>ipi</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>avic</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>emsr_bitmap</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>xmm_input</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <defaults>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <spinlocks>4095</spinlocks>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <stimer_direct>on</stimer_direct>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </defaults>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </hyperv>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <launchSecurity supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='sectype'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>tdx</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </launchSecurity>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </features>
Dec 09 10:39:17 compute-0 nova_compute[189493]: </domainCapabilities>
Dec 09 10:39:17 compute-0 nova_compute[189493]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 09 10:39:17 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.877 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 09 10:39:17 compute-0 nova_compute[189493]: <domainCapabilities>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <path>/usr/libexec/qemu-kvm</path>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <domain>kvm</domain>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <arch>x86_64</arch>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <vcpu max='240'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <iothreads supported='yes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <os supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <enum name='firmware'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <loader supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>rom</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>pflash</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='readonly'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>yes</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>no</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='secure'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>no</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </loader>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   </os>
Dec 09 10:39:17 compute-0 nova_compute[189493]:   <cpu>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='host-passthrough' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='hostPassthroughMigratable'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>on</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>off</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='maximum' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <enum name='maximumMigratable'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>on</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <value>off</value>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='host-model' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <vendor>AMD</vendor>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='x2apic'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc-deadline'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='hypervisor'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc_adjust'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='spec-ctrl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='stibp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='cmp_legacy'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='overflow-recov'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='succor'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='amd-ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='virt-ssbd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='lbrv'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='tsc-scale'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='vmcb-clean'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='flushbyasid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='pause-filter'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='pfthreshold'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='svme-addr-chk'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <feature policy='disable' name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:17 compute-0 nova_compute[189493]:     <mode name='custom' supported='yes'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Broadwell-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v4'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cascadelake-Server-v5'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Cooperlake-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Denverton-v3'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='Dhyana-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Genoa'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='auto-ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Genoa-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='auto-ibrs'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Milan-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='amd-psfd'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='no-nested-data-bp'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='null-sel-clr-base'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='stibp-always-on'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v1'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v2'>
Dec 09 10:39:17 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:17 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='EPYC-Rome-v3'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='EPYC-v3'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='EPYC-v4'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='GraniteRapids-v2'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx10'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx10-128'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx10-256'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx10-512'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='prefetchiti'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Haswell'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Haswell-IBRS'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Haswell-noTSX'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Haswell-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Haswell-v2'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Haswell-v3'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Haswell-v4'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-noTSX'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v2'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v3'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v4'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v5'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v6'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Icelake-Server-v7'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='IvyBridge'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-IBRS'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='IvyBridge-v2'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='KnightsMill'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512er'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512pf'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='KnightsMill-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-4fmaps'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-4vnniw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512er'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512pf'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Opteron_G4'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Opteron_G4-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Opteron_G5'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='tbm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Opteron_G5-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fma4'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='tbm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xop'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v2'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='SapphireRapids-v3'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-int8'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='amx-tile'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-bf16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-fp16'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512-vpopcntdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bitalg'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vbmi2'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrc'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fzrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='la57'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='taa-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='tsx-ldtrk'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xfd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='SierraForest'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='cmpccxadd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='SierraForest-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-ifma'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-ne-convert'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx-vnni-int8'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='bus-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='cmpccxadd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fbsdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='fsrs'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ibrs-all'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='mcdt-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pbrsb-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='psdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='sbdr-ssdp-no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='serialize'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vaes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='vpclmulqdq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-IBRS'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v2'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v3'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Client-v4'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-IBRS'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v2'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='hle'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='rtm'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v3'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v4'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Skylake-Server-v5'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512bw'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512cd'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512dq'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512f'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='avx512vl'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='invpcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pcid'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='pku'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Snowridge'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='mpx'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v2'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v3'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='core-capability'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='split-lock-detect'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='Snowridge-v4'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='cldemote'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='erms'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='gfni'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdir64b'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='movdiri'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='xsaves'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='athlon'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='athlon-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='core2duo'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='core2duo-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='coreduo'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='coreduo-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='n270'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='n270-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='ss'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='phenom'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <blockers model='phenom-v1'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='3dnow'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <feature name='3dnowext'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </blockers>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </mode>
Dec 09 10:39:18 compute-0 nova_compute[189493]:   </cpu>
Dec 09 10:39:18 compute-0 nova_compute[189493]:   <memoryBacking supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <enum name='sourceType'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <value>file</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <value>anonymous</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <value>memfd</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:   </memoryBacking>
Dec 09 10:39:18 compute-0 nova_compute[189493]:   <devices>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <disk supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='diskDevice'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>disk</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>cdrom</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>floppy</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>lun</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='bus'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>ide</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>fdc</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>scsi</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>sata</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>virtio-transitional</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>virtio-non-transitional</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <graphics supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>vnc</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>egl-headless</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>dbus</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </graphics>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <video supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='modelType'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>vga</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>cirrus</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>none</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>bochs</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>ramfb</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </video>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <hostdev supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='mode'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>subsystem</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='startupPolicy'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>default</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>mandatory</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>requisite</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>optional</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='subsysType'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>pci</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>scsi</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='capsType'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='pciBackend'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </hostdev>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <rng supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>virtio</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>virtio-transitional</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>virtio-non-transitional</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>random</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>egd</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>builtin</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </rng>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <filesystem supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='driverType'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>path</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>handle</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>virtiofs</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </filesystem>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <tpm supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>tpm-tis</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>tpm-crb</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>emulator</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>external</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='backendVersion'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>2.0</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </tpm>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <redirdev supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='bus'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>usb</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </redirdev>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <channel supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>pty</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>unix</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </channel>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <crypto supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='model'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>qemu</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='backendModel'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>builtin</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </crypto>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <interface supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='backendType'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>default</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>passt</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </interface>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <panic supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='model'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>isa</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>hyperv</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </panic>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <console supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='type'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>null</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>vc</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>pty</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>dev</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>file</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>pipe</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>stdio</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>udp</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>tcp</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>unix</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>qemu-vdagent</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>dbus</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </console>
Dec 09 10:39:18 compute-0 nova_compute[189493]:   </devices>
Dec 09 10:39:18 compute-0 nova_compute[189493]:   <features>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <gic supported='no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <vmcoreinfo supported='yes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <genid supported='yes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <backingStoreInput supported='yes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <backup supported='yes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <async-teardown supported='yes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <ps2 supported='yes'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <sev supported='no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <sgx supported='no'/>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <hyperv supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='features'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>relaxed</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>vapic</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>spinlocks</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>vpindex</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>runtime</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>synic</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>stimer</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>reset</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>vendor_id</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>frequencies</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>reenlightenment</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>tlbflush</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>ipi</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>avic</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>emsr_bitmap</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>xmm_input</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <defaults>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <spinlocks>4095</spinlocks>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <stimer_direct>on</stimer_direct>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <tlbflush_direct>on</tlbflush_direct>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <tlbflush_extended>on</tlbflush_extended>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </defaults>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </hyperv>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     <launchSecurity supported='yes'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       <enum name='sectype'>
Dec 09 10:39:18 compute-0 nova_compute[189493]:         <value>tdx</value>
Dec 09 10:39:18 compute-0 nova_compute[189493]:       </enum>
Dec 09 10:39:18 compute-0 nova_compute[189493]:     </launchSecurity>
Dec 09 10:39:18 compute-0 nova_compute[189493]:   </features>
Dec 09 10:39:18 compute-0 nova_compute[189493]: </domainCapabilities>
Dec 09 10:39:18 compute-0 nova_compute[189493]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
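The <domainCapabilities> document that ends above is the XML libvirt reports for this host's emulator and machine type; nova reads it through _get_domain_capabilities to decide which named CPU models it can offer. A minimal offline sketch for summarising such a dump, assuming the XML has been saved to a hypothetical local file named domcaps.xml and using only the Python standard library:

    # Summarise the custom-mode CPU model list from a saved domainCapabilities
    # dump: print which named models are usable and, for the rest, the host
    # features whose absence blocks them (the <blockers> elements above).
    import xml.etree.ElementTree as ET

    root = ET.parse("domcaps.xml").getroot()          # hypothetical local copy
    mode = root.find("./cpu/mode[@name='custom']")
    blocked_by = {
        b.get("model"): [f.get("name") for f in b.findall("feature")]
        for b in mode.findall("blockers")
    }
    for model in mode.findall("model"):
        if model.get("usable") == "yes":
            print("usable :", model.text)
        else:
            print("blocked:", model.text,
                  "missing:", ", ".join(blocked_by.get(model.text, [])))

Run against the dump above, this would report Westmere as usable and, for example, Skylake-Server as blocked on the avx512* flags, pku, hle/rtm and several other host features the guest vCPU lacks.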
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.970 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.971 189497 INFO nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Secure Boot support detected
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.973 189497 INFO nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.973 189497 INFO nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:17.981 189497 DEBUG nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.007 189497 INFO nova.virt.node [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Determined node identity cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from /var/lib/nova/compute_id
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.021 189497 WARNING nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Compute nodes ['cdc1168d-33c9-4d2c-8f23-1b695a68afd0'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.047 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.063 189497 WARNING nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.063 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.064 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.064 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.064 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.208 189497 WARNING nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.209 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5995MB free_disk=72.40912628173828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.209 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.209 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.245 189497 WARNING nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] No compute node record for compute-0.ctlplane.example.com:cdc1168d-33c9-4d2c-8f23-1b695a68afd0: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host cdc1168d-33c9-4d2c-8f23-1b695a68afd0 could not be found.
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.266 189497 INFO nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: cdc1168d-33c9-4d2c-8f23-1b695a68afd0
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.327 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:39:18 compute-0 nova_compute[189493]: 2025-12-09 10:39:18.327 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:39:19 compute-0 nova_compute[189493]: 2025-12-09 10:39:19.583 189497 INFO nova.scheduler.client.report [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [req-e6f2114b-7555-4d84-9400-22ea6556edd7] Created resource provider record via placement API for resource provider with UUID cdc1168d-33c9-4d2c-8f23-1b695a68afd0 and name compute-0.ctlplane.example.com.
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.175 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 09 10:39:20 compute-0 nova_compute[189493]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.176 189497 INFO nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] kernel doesn't support AMD SEV
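The SEV probe above is just a read of a kernel module parameter: on this host /sys/module/kvm_amd/parameters/sev contains "N", so emulated SEV stays off. A standalone sketch of the same style of check; the helper name is made up here, and the parameter typically reads "Y" or "1" when the kernel exposes SEV:

    # Report whether the kvm_amd module advertises SEV support, based on the
    # bool module parameter inspected in the log line above.
    from pathlib import Path

    def kernel_advertises_sev() -> bool:      # hypothetical helper name
        param = Path("/sys/module/kvm_amd/parameters/sev")
        return param.exists() and param.read_text().strip() in ("Y", "y", "1")

    print("kernel supports AMD SEV:", kernel_advertises_sev())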
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.177 189497 DEBUG nova.compute.provider_tree [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.177 189497 DEBUG nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.237 189497 DEBUG nova.scheduler.client.report [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updated inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.237 189497 DEBUG nova.compute.provider_tree [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updating resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.238 189497 DEBUG nova.compute.provider_tree [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.350 189497 DEBUG nova.compute.provider_tree [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updating resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.380 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.380 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.381 189497 DEBUG nova.service [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.488 189497 DEBUG nova.service [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 09 10:39:20 compute-0 nova_compute[189493]: 2025-12-09 10:39:20.488 189497 DEBUG nova.servicegroup.drivers.db [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
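For reference, the inventory pushed to Placement a few lines above translates into schedulable capacity as (total - reserved) * allocation_ratio per resource class; a small sketch using the exact figures from the update_inventory line:

    # Schedulable capacity implied by the inventory logged above:
    # MEMORY_MB -> 7167, VCPU -> 32, DISK_GB -> 71.1.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")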
Dec 09 10:39:22 compute-0 sshd-session[189792]: Accepted publickey for zuul from 192.168.122.30 port 36668 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:39:22 compute-0 systemd-logind[806]: New session 26 of user zuul.
Dec 09 10:39:22 compute-0 systemd[1]: Started Session 26 of User zuul.
Dec 09 10:39:22 compute-0 sshd-session[189792]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:39:22 compute-0 podman[189794]: 2025-12-09 10:39:22.39272665 +0000 UTC m=+0.064697879 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:39:23 compute-0 python3.9[189965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:39:24 compute-0 sudo[190119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzzvbxxykiymuykrtupdwoaawihmrfkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276764.1680107-36-177925092825166/AnsiballZ_systemd_service.py'
Dec 09 10:39:24 compute-0 sudo[190119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:25 compute-0 python3.9[190121]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:39:25 compute-0 systemd[1]: Reloading.
Dec 09 10:39:25 compute-0 systemd-rc-local-generator[190147]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:39:25 compute-0 systemd-sysv-generator[190152]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:39:25 compute-0 sudo[190119]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:26 compute-0 python3.9[190306]: ansible-ansible.builtin.service_facts Invoked
Dec 09 10:39:26 compute-0 network[190323]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 10:39:26 compute-0 network[190324]: 'network-scripts' will be removed from distribution in near future.
Dec 09 10:39:26 compute-0 network[190325]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 09 10:39:29 compute-0 podman[190403]: 2025-12-09 10:39:29.659867575 +0000 UTC m=+0.099330542 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 09 10:39:31 compute-0 sudo[190623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkieisjmerhvtxnqwfhlyckanikakolf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276770.7694125-55-67097394039396/AnsiballZ_systemd_service.py'
Dec 09 10:39:31 compute-0 sudo[190623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:31 compute-0 python3.9[190625]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:39:31 compute-0 sudo[190623]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:32 compute-0 sudo[190776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gilyrvyhrcvtqwjblezdhfbsyerdxtyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276771.777617-65-228380610847906/AnsiballZ_file.py'
Dec 09 10:39:32 compute-0 sudo[190776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:32 compute-0 python3.9[190778]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:32 compute-0 sudo[190776]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:32 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 10:39:32 compute-0 sudo[190929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hacgczdbvvqpwbmkmkiacogeotcxqynr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276772.5698333-73-10387906351859/AnsiballZ_file.py'
Dec 09 10:39:32 compute-0 sudo[190929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:33 compute-0 python3.9[190931]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:33 compute-0 sudo[190929]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:33 compute-0 sudo[191081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltkvxtfrsuwlmyqhcpjdclvftcsntiyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276773.2483997-82-3144345814464/AnsiballZ_command.py'
Dec 09 10:39:33 compute-0 sudo[191081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:34 compute-0 python3.9[191083]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:39:34 compute-0 sudo[191081]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:34 compute-0 python3.9[191235]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 10:39:35 compute-0 sudo[191385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqlviwwcqjmahoqvaytuudnjuwvuvcgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276775.1540601-100-218329818314041/AnsiballZ_systemd_service.py'
Dec 09 10:39:35 compute-0 sudo[191385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:35 compute-0 python3.9[191387]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:39:35 compute-0 systemd[1]: Reloading.
Dec 09 10:39:35 compute-0 systemd-sysv-generator[191419]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:39:35 compute-0 systemd-rc-local-generator[191416]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:39:36 compute-0 sudo[191385]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:36 compute-0 sshd-session[191388]: Connection closed by authenticating user daemon 159.223.8.217 port 42708 [preauth]
Dec 09 10:39:36 compute-0 sudo[191573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlbepegwlnbfyapejpqrgusvkseonomt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276776.206454-108-171460640507935/AnsiballZ_command.py'
Dec 09 10:39:36 compute-0 sudo[191573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:36 compute-0 python3.9[191575]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:39:36 compute-0 sudo[191573]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:37 compute-0 sudo[191726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgllyetasxmzuntfvtzdyvgwurmgbleh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276776.8783274-117-170524186697440/AnsiballZ_file.py'
Dec 09 10:39:37 compute-0 sudo[191726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:37 compute-0 python3.9[191728]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:39:37 compute-0 sudo[191726]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:38 compute-0 python3.9[191878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:39:38 compute-0 python3.9[192030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:39 compute-0 podman[192125]: 2025-12-09 10:39:39.601715049 +0000 UTC m=+0.084291434 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 09 10:39:39 compute-0 python3.9[192162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276778.4503536-133-140535091597249/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:39:40 compute-0 sudo[192321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwopnrrtiynfgpwixsalxzcifqzjltxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276779.939661-148-267622147068788/AnsiballZ_group.py'
Dec 09 10:39:40 compute-0 sudo[192321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:40 compute-0 python3.9[192323]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 09 10:39:40 compute-0 sudo[192321]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:41 compute-0 sudo[192473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iapbnqoecpgwxfgyrmhowlogkyhwrusp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276780.9574966-159-177019398638283/AnsiballZ_getent.py'
Dec 09 10:39:41 compute-0 sudo[192473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:41 compute-0 python3.9[192475]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 09 10:39:41 compute-0 sudo[192473]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:42 compute-0 sudo[192626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rftfvkdgzggqfduiaqzmgpehzfnsmayl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276781.7603307-167-50380776129177/AnsiballZ_group.py'
Dec 09 10:39:42 compute-0 sudo[192626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:42 compute-0 python3.9[192628]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 09 10:39:42 compute-0 groupadd[192629]: group added to /etc/group: name=ceilometer, GID=42405
Dec 09 10:39:42 compute-0 groupadd[192629]: group added to /etc/gshadow: name=ceilometer
Dec 09 10:39:42 compute-0 groupadd[192629]: new group: name=ceilometer, GID=42405
Dec 09 10:39:42 compute-0 sudo[192626]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:43 compute-0 sudo[192784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zprajudcjblxvvsenbcyzynakwgxetnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276782.5605223-175-134431317149153/AnsiballZ_user.py'
Dec 09 10:39:43 compute-0 sudo[192784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:39:43 compute-0 python3.9[192786]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 09 10:39:43 compute-0 useradd[192788]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Dec 09 10:39:43 compute-0 useradd[192788]: add 'ceilometer' to group 'libvirt'
Dec 09 10:39:43 compute-0 useradd[192788]: add 'ceilometer' to shadow group 'libvirt'
Dec 09 10:39:43 compute-0 sudo[192784]: pam_unix(sudo:session): session closed for user root
Dec 09 10:39:44 compute-0 python3.9[192944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:45 compute-0 python3.9[193065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276784.0966673-201-230735722321652/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:45 compute-0 python3.9[193215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:46 compute-0 python3.9[193336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276785.2517629-201-138191552085246/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:46 compute-0 python3.9[193486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:47 compute-0 python3.9[193607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276786.483762-201-41019246652058/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:48 compute-0 python3.9[193757]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:39:48 compute-0 nova_compute[189493]: 2025-12-09 10:39:48.491 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:39:48 compute-0 nova_compute[189493]: 2025-12-09 10:39:48.531 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:39:49 compute-0 python3.9[193909]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:39:49 compute-0 python3.9[194061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:50 compute-0 python3.9[194182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276789.3812351-260-76771144190729/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:51 compute-0 python3.9[194332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:51 compute-0 python3.9[194408]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:52 compute-0 python3.9[194558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:52 compute-0 podman[194653]: 2025-12-09 10:39:52.763940296 +0000 UTC m=+0.091099215 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 10:39:52 compute-0 python3.9[194686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276791.6609843-260-127616818774162/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:53 compute-0 python3.9[194849]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:53 compute-0 python3.9[194970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276793.0252616-260-59262484753373/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:54 compute-0 python3.9[195120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:55 compute-0 python3.9[195241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276794.1054826-260-19412778976771/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:55 compute-0 python3.9[195391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:56 compute-0 python3.9[195512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276795.271495-260-192684799483741/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:57 compute-0 python3.9[195662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:57 compute-0 python3.9[195783]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276796.6836395-260-217485946628743/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:58 compute-0 python3.9[195933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:58 compute-0 python3.9[196054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276797.8660398-260-136152620593911/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:39:59 compute-0 python3.9[196204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:39:59 compute-0 podman[196299]: 2025-12-09 10:39:59.867729915 +0000 UTC m=+0.102644105 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec 09 10:39:59 compute-0 python3.9[196335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276799.016886-260-275075943306042/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:00 compute-0 python3.9[196502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:01 compute-0 python3.9[196623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276800.1452332-260-54191626509074/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:01 compute-0 python3.9[196773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:02 compute-0 python3.9[196894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276801.3243465-260-70705576530291/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:03 compute-0 python3.9[197044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:03 compute-0 python3.9[197120]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:04 compute-0 python3.9[197270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:04 compute-0 python3.9[197346]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:05 compute-0 python3.9[197496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:05 compute-0 python3.9[197572]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:06 compute-0 sshd-session[197573]: Connection closed by authenticating user daemon 159.223.8.217 port 49106 [preauth]
Dec 09 10:40:06 compute-0 sudo[197724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsqsqkjtctczwrmddzdjzuwaxrdowbkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276806.0749438-449-199716465454107/AnsiballZ_file.py'
Dec 09 10:40:06 compute-0 sudo[197724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:06 compute-0 python3.9[197726]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:06 compute-0 sudo[197724]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:07 compute-0 sudo[197876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfsavainnpjvedstgmkjjfretfeaaiim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276806.734305-457-71705704892930/AnsiballZ_file.py'
Dec 09 10:40:07 compute-0 sudo[197876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:07 compute-0 python3.9[197878]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:07 compute-0 sudo[197876]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:07 compute-0 sudo[198028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnkoimnokrbbnybexvetikmvhtqzqohd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276807.4554698-465-91682646222472/AnsiballZ_file.py'
Dec 09 10:40:07 compute-0 sudo[198028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:07 compute-0 python3.9[198030]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:40:08 compute-0 sudo[198028]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:08 compute-0 sudo[198180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qheluqopirpiowskmwxfatffnzaskgcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276808.1771338-473-113503072223277/AnsiballZ_systemd_service.py'
Dec 09 10:40:08 compute-0 sudo[198180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:08 compute-0 python3.9[198182]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:40:08 compute-0 systemd[1]: Reloading.
Dec 09 10:40:08 compute-0 systemd-rc-local-generator[198206]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:40:08 compute-0 systemd-sysv-generator[198210]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:40:09 compute-0 systemd[1]: Listening on Podman API Socket.
Dec 09 10:40:09 compute-0 sudo[198180]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:09 compute-0 sudo[198378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlnrnavunlryayjevzqiuywzqohegrhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276809.531041-482-177504918846861/AnsiballZ_stat.py'
Dec 09 10:40:09 compute-0 sudo[198378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:09 compute-0 podman[198344]: 2025-12-09 10:40:09.876583096 +0000 UTC m=+0.070963787 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:40:10 compute-0 python3.9[198386]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:10 compute-0 sudo[198378]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:10 compute-0 sudo[198510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psocewyydyhktjnpqxickiyfrtudntzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276809.531041-482-177504918846861/AnsiballZ_copy.py'
Dec 09 10:40:10 compute-0 sudo[198510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:10 compute-0 python3.9[198512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276809.531041-482-177504918846861/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:40:10 compute-0 sudo[198510]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:10 compute-0 sudo[198586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vumwqbopwhnjrcngoinjsgbpsvndscve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276809.531041-482-177504918846861/AnsiballZ_stat.py'
Dec 09 10:40:10 compute-0 sudo[198586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:11 compute-0 python3.9[198588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:11 compute-0 sudo[198586]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:11 compute-0 sudo[198709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvkrgwldiokutoektrwtbtncchckttlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276809.531041-482-177504918846861/AnsiballZ_copy.py'
Dec 09 10:40:11 compute-0 sudo[198709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:11 compute-0 python3.9[198711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276809.531041-482-177504918846861/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:40:11 compute-0 sudo[198709]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:12 compute-0 sudo[198861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pccgblarfvxqtfjqfwmzapvulafkezqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276811.8796582-510-55252035716286/AnsiballZ_container_config_data.py'
Dec 09 10:40:12 compute-0 sudo[198861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:12 compute-0 python3.9[198863]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 09 10:40:12 compute-0 sudo[198861]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:13 compute-0 sudo[199013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqxxctxrmugfgizlettasnedzpvowmra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276812.8732486-519-243025467259357/AnsiballZ_container_config_hash.py'
Dec 09 10:40:13 compute-0 sudo[199013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:13 compute-0 python3.9[199015]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:40:13 compute-0 sudo[199013]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:14 compute-0 sudo[199165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urrzvbixxyqexhdxajkhrunlfdcecmdl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276813.9244516-529-215467948556950/AnsiballZ_edpm_container_manage.py'
Dec 09 10:40:14 compute-0 sudo[199165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:14 compute-0 python3[199167]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:40:15 compute-0 podman[199202]: 2025-12-09 10:40:15.006078851 +0000 UTC m=+0.076065706 container create b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 09 10:40:15 compute-0 podman[199202]: 2025-12-09 10:40:14.957153039 +0000 UTC m=+0.027139924 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec 09 10:40:15 compute-0 python3[199167]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Dec 09 10:40:15 compute-0 sudo[199165]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:15 compute-0 sudo[199389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjtsxltuueajiibhoegyyqumqyrolsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276815.3504624-537-19955787231394/AnsiballZ_stat.py'
Dec 09 10:40:15 compute-0 sudo[199389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:15 compute-0 python3.9[199391]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:40:15 compute-0 sudo[199389]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:16 compute-0 sudo[199543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqvrfyjiiockhkwqwvezdvrcuglwqrwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276816.2037356-546-276325116954012/AnsiballZ_file.py'
Dec 09 10:40:16 compute-0 sudo[199543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:16 compute-0 python3.9[199545]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:16 compute-0 sudo[199543]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.845 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.845 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.845 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.869 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.869 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.869 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.869 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.904 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.904 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.905 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:40:16 compute-0 nova_compute[189493]: 2025-12-09 10:40:16.905 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:40:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:40:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:40:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:40:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:40:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:40:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.072 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.073 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5980MB free_disk=72.40877532958984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.073 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.074 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.171 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.171 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.202 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.219 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.221 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:40:17 compute-0 nova_compute[189493]: 2025-12-09 10:40:17.221 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:40:17 compute-0 sudo[199694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvjyjjpixznglbnmtfwycbehymdszfqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276816.7640862-546-120083506692431/AnsiballZ_copy.py'
Dec 09 10:40:17 compute-0 sudo[199694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:17 compute-0 python3.9[199696]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276816.7640862-546-120083506692431/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:17 compute-0 sudo[199694]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:18 compute-0 sudo[199770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwzoaxkueeqetkcmzoijzizkkssxfvnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276816.7640862-546-120083506692431/AnsiballZ_systemd.py'
Dec 09 10:40:18 compute-0 sudo[199770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:18 compute-0 python3.9[199772]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:40:18 compute-0 systemd[1]: Reloading.
Dec 09 10:40:18 compute-0 systemd-sysv-generator[199799]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:40:18 compute-0 systemd-rc-local-generator[199795]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:40:18 compute-0 sudo[199770]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:19 compute-0 sudo[199881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfwmpfeheyotpcmwcrekyixvvvewtcnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276816.7640862-546-120083506692431/AnsiballZ_systemd.py'
Dec 09 10:40:19 compute-0 sudo[199881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:19 compute-0 python3.9[199883]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:40:19 compute-0 systemd[1]: Reloading.
Dec 09 10:40:19 compute-0 systemd-rc-local-generator[199915]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:40:19 compute-0 systemd-sysv-generator[199919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:40:19 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 09 10:40:20 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:20 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.
Dec 09 10:40:20 compute-0 podman[199923]: 2025-12-09 10:40:20.090749067 +0000 UTC m=+0.174331971 container init b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: + sudo -E kolla_set_configs
Dec 09 10:40:20 compute-0 sudo[199944]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: sudo: unable to send audit message: Operation not permitted
Dec 09 10:40:20 compute-0 podman[199923]: 2025-12-09 10:40:20.126407915 +0000 UTC m=+0.209990699 container start b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 09 10:40:20 compute-0 sudo[199944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:40:20 compute-0 podman[199923]: ceilometer_agent_compute
Dec 09 10:40:20 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 09 10:40:20 compute-0 sudo[199881]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Validating config file
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Copying service configuration files
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: INFO:__main__:Writing out command to execute
Dec 09 10:40:20 compute-0 podman[199945]: 2025-12-09 10:40:20.184056957 +0000 UTC m=+0.046329478 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 09 10:40:20 compute-0 sudo[199944]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:20 compute-0 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-7bee615facdddd8.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:40:20 compute-0 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-7bee615facdddd8.service: Failed with result 'exit-code'.
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: ++ cat /run_command
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: + ARGS=
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: + sudo kolla_copy_cacerts
Dec 09 10:40:20 compute-0 sudo[199968]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: sudo: unable to send audit message: Operation not permitted
Dec 09 10:40:20 compute-0 sudo[199968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:40:20 compute-0 sudo[199968]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: + [[ ! -n '' ]]
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: + . kolla_extend_start
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: + umask 0022
Dec 09 10:40:20 compute-0 ceilometer_agent_compute[199938]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 09 10:40:20 compute-0 sudo[200120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akewzifjvuzmljlgdmsialdbnqgnwkue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276820.3437173-570-189897917951078/AnsiballZ_systemd.py'
Dec 09 10:40:20 compute-0 sudo[200120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:20 compute-0 python3.9[200122]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:40:21 compute-0 systemd[1]: Stopping ceilometer_agent_compute container...
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.122 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.122 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.135 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.155 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.156 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.156 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.156 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.156 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.169 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.171 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.171 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.263 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.265 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.265 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.382 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.391 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.392 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.392 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.526 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec 09 10:40:21 compute-0 virtqemud[189118]: End of file while reading data: Input/output error
Dec 09 10:40:21 compute-0 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.549 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Dec 09 10:40:21 compute-0 systemd[1]: libpod-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec 09 10:40:21 compute-0 systemd[1]: libpod-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Consumed 1.669s CPU time.
Dec 09 10:40:21 compute-0 podman[200126]: 2025-12-09 10:40:21.761404966 +0000 UTC m=+0.736886137 container died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125)
Dec 09 10:40:21 compute-0 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-7bee615facdddd8.timer: Deactivated successfully.
Dec 09 10:40:21 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.
Dec 09 10:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-userdata-shm.mount: Deactivated successfully.
Dec 09 10:40:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05-merged.mount: Deactivated successfully.
Dec 09 10:40:21 compute-0 podman[200126]: 2025-12-09 10:40:21.872855143 +0000 UTC m=+0.848336284 container cleanup b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 10:40:21 compute-0 podman[200126]: ceilometer_agent_compute
Dec 09 10:40:21 compute-0 podman[200168]: ceilometer_agent_compute
Dec 09 10:40:21 compute-0 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 09 10:40:21 compute-0 systemd[1]: Stopped ceilometer_agent_compute container.
Dec 09 10:40:21 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Dec 09 10:40:22 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:22 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.
Dec 09 10:40:22 compute-0 podman[200182]: 2025-12-09 10:40:22.103652554 +0000 UTC m=+0.123166023 container init b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: + sudo -E kolla_set_configs
Dec 09 10:40:22 compute-0 sudo[200203]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: sudo: unable to send audit message: Operation not permitted
Dec 09 10:40:22 compute-0 sudo[200203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:40:22 compute-0 podman[200182]: 2025-12-09 10:40:22.136066359 +0000 UTC m=+0.155579828 container start b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec 09 10:40:22 compute-0 podman[200182]: ceilometer_agent_compute
Dec 09 10:40:22 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Dec 09 10:40:22 compute-0 sudo[200120]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Validating config file
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Copying service configuration files
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: INFO:__main__:Writing out command to execute
Dec 09 10:40:22 compute-0 sudo[200203]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: ++ cat /run_command
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: + ARGS=
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: + sudo kolla_copy_cacerts
Dec 09 10:40:22 compute-0 podman[200204]: 2025-12-09 10:40:22.202054881 +0000 UTC m=+0.054605655 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec 09 10:40:22 compute-0 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-4dddc5eb05d7029b.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:40:22 compute-0 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-4dddc5eb05d7029b.service: Failed with result 'exit-code'.
Dec 09 10:40:22 compute-0 sudo[200226]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: sudo: unable to send audit message: Operation not permitted
Dec 09 10:40:22 compute-0 sudo[200226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:40:22 compute-0 sudo[200226]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: + [[ ! -n '' ]]
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: + . kolla_extend_start
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: + umask 0022
Dec 09 10:40:22 compute-0 ceilometer_agent_compute[200197]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 09 10:40:22 compute-0 auditd[700]: Audit daemon rotating log files
Dec 09 10:40:22 compute-0 sudo[200378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wautaloahdqyojlwbnhlodxpkcrksurv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276822.3521905-578-248556456944523/AnsiballZ_stat.py'
Dec 09 10:40:22 compute-0 sudo[200378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:22 compute-0 python3.9[200380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:22 compute-0 sudo[200378]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:22 compute-0 podman[200381]: 2025-12-09 10:40:22.892003675 +0000 UTC m=+0.056288149 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.065 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.065 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.065 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.065 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.097 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.097 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.113 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.116 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.116 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.118 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.124 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollster configurations at [['/etc/ceilometer/pollsters.d']].
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.125 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.125 14 INFO ceilometer.polling.manager [-] No dynamic pollster files found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.262 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.284 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them, so polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.284 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.286 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:40:23 compute-0 sudo[200529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toatlcewpkzsgdamwjvkycspwoukjxik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276822.3521905-578-248556456944523/AnsiballZ_copy.py'
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 sudo[200529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:40:23 compute-0 python3.9[200534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276822.3521905-578-248556456944523/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:40:23 compute-0 sudo[200529]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:24 compute-0 sudo[200684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhoiupjegzqdbocrkvquljhggnjwklqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276823.8382003-595-103057096062675/AnsiballZ_container_config_data.py'
Dec 09 10:40:24 compute-0 sudo[200684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:24 compute-0 python3.9[200686]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 09 10:40:24 compute-0 sudo[200684]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:25 compute-0 sudo[200836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmwhughhmrvyzzubglpfgmzujwgilnpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276824.784275-604-266767206452488/AnsiballZ_container_config_hash.py'
Dec 09 10:40:25 compute-0 sudo[200836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:25 compute-0 python3.9[200838]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:40:25 compute-0 sudo[200836]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:25 compute-0 sudo[200988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfpnumerfyaqdzgunmjcprgxudorkswr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276825.5994399-614-55831887092459/AnsiballZ_edpm_container_manage.py'
Dec 09 10:40:25 compute-0 sudo[200988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:26 compute-0 python3[200990]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:40:26 compute-0 podman[201027]: 2025-12-09 10:40:26.389049133 +0000 UTC m=+0.045128307 container create d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:40:26 compute-0 podman[201027]: 2025-12-09 10:40:26.364827883 +0000 UTC m=+0.020907077 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec 09 10:40:26 compute-0 python3[200990]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec 09 10:40:26 compute-0 sudo[200988]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:27 compute-0 sudo[201215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-robjtalaqdbtcavueydkpxutfilcexul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276826.7375195-622-171488110232904/AnsiballZ_stat.py'
Dec 09 10:40:27 compute-0 sudo[201215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:27 compute-0 python3.9[201217]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:40:27 compute-0 sudo[201215]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:27 compute-0 sudo[201369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szgejfdppenuldtctclspumeghyiefkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276827.5169232-631-41373486880321/AnsiballZ_file.py'
Dec 09 10:40:27 compute-0 sudo[201369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:28 compute-0 python3.9[201371]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:28 compute-0 sudo[201369]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:28 compute-0 sudo[201520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoriahayltmygenbvazmklqzsavyjbym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276828.1435943-631-264230454308935/AnsiballZ_copy.py'
Dec 09 10:40:28 compute-0 sudo[201520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:28 compute-0 python3.9[201522]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276828.1435943-631-264230454308935/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:28 compute-0 sudo[201520]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:29 compute-0 sudo[201596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aprlnszzoqxsqlifwuwvhrlrlmgtsyto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276828.1435943-631-264230454308935/AnsiballZ_systemd.py'
Dec 09 10:40:29 compute-0 sudo[201596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:29 compute-0 python3.9[201598]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:40:29 compute-0 systemd[1]: Reloading.
Dec 09 10:40:29 compute-0 systemd-rc-local-generator[201623]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:40:29 compute-0 systemd-sysv-generator[201627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:40:29 compute-0 sudo[201596]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:29 compute-0 sudo[201720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avnrssubprdgpjddmghimnfnsqdsvpqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276828.1435943-631-264230454308935/AnsiballZ_systemd.py'
Dec 09 10:40:29 compute-0 sudo[201720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:30 compute-0 podman[201681]: 2025-12-09 10:40:30.028853667 +0000 UTC m=+0.115117290 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec 09 10:40:30 compute-0 python3.9[201728]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:40:30 compute-0 systemd[1]: Reloading.
Dec 09 10:40:30 compute-0 systemd-rc-local-generator[201760]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:40:30 compute-0 systemd-sysv-generator[201766]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:40:30 compute-0 systemd[1]: Starting node_exporter container...
Dec 09 10:40:30 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:30 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.
Dec 09 10:40:30 compute-0 podman[201774]: 2025-12-09 10:40:30.771964279 +0000 UTC m=+0.123858230 container init d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=arp
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=bcache
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=bonding
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=cpu
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=edac
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=filefd
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=netclass
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=netdev
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=netstat
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=nfs
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=nvme
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=softnet
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=systemd
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=xfs
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=zfs
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 09 10:40:30 compute-0 node_exporter[201789]: ts=2025-12-09T10:40:30.790Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 09 10:40:30 compute-0 podman[201774]: 2025-12-09 10:40:30.806435077 +0000 UTC m=+0.158328978 container start d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:40:30 compute-0 podman[201774]: node_exporter
Dec 09 10:40:30 compute-0 systemd[1]: Started node_exporter container.
Dec 09 10:40:30 compute-0 sudo[201720]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:30 compute-0 podman[201798]: 2025-12-09 10:40:30.871597358 +0000 UTC m=+0.055065298 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 10:40:31 compute-0 sudo[201971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhmmkfthzxrltjuaydzwbnaoeenmgurc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276831.0150244-655-99916493451123/AnsiballZ_systemd.py'
Dec 09 10:40:31 compute-0 sudo[201971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:31 compute-0 python3.9[201973]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:40:31 compute-0 systemd[1]: Stopping node_exporter container...
Dec 09 10:40:31 compute-0 systemd[1]: libpod-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec 09 10:40:31 compute-0 podman[201977]: 2025-12-09 10:40:31.699517215 +0000 UTC m=+0.048018830 container died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:40:31 compute-0 systemd[1]: d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b-292088ee529aa019.timer: Deactivated successfully.
Dec 09 10:40:31 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.
Dec 09 10:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b-userdata-shm.mount: Deactivated successfully.
Dec 09 10:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6-merged.mount: Deactivated successfully.
Dec 09 10:40:31 compute-0 podman[201977]: 2025-12-09 10:40:31.743489193 +0000 UTC m=+0.091990768 container cleanup d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:40:31 compute-0 podman[201977]: node_exporter
Dec 09 10:40:31 compute-0 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 09 10:40:31 compute-0 podman[202006]: node_exporter
Dec 09 10:40:31 compute-0 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 09 10:40:31 compute-0 systemd[1]: Stopped node_exporter container.
Dec 09 10:40:31 compute-0 systemd[1]: Starting node_exporter container...
Dec 09 10:40:31 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:31 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.
Dec 09 10:40:31 compute-0 podman[202019]: 2025-12-09 10:40:31.951552552 +0000 UTC m=+0.107647931 container init d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.965Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.965Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.965Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.966Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.966Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.966Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.966Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=arp
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=bcache
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=bonding
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=cpu
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=edac
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=filefd
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=netclass
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=netdev
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=netstat
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=nfs
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=nvme
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=softnet
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=systemd
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=xfs
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=zfs
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.968Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 09 10:40:31 compute-0 node_exporter[202035]: ts=2025-12-09T10:40:31.968Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec 09 10:40:31 compute-0 podman[202019]: 2025-12-09 10:40:31.976208683 +0000 UTC m=+0.132304032 container start d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:40:31 compute-0 podman[202019]: node_exporter
Dec 09 10:40:31 compute-0 systemd[1]: Started node_exporter container.
Dec 09 10:40:32 compute-0 sudo[201971]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:32 compute-0 podman[202044]: 2025-12-09 10:40:32.101412786 +0000 UTC m=+0.115752526 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:40:32 compute-0 sudo[202220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puaxducmwvenihcudeeppeavnmajrxhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276832.200274-663-139541723923369/AnsiballZ_stat.py'
Dec 09 10:40:32 compute-0 sudo[202220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:32 compute-0 python3.9[202222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:32 compute-0 sudo[202220]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:33 compute-0 sudo[202343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjculrqirwpgybskdgpmxgveqehcguk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276832.200274-663-139541723923369/AnsiballZ_copy.py'
Dec 09 10:40:33 compute-0 sudo[202343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:33 compute-0 python3.9[202345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276832.200274-663-139541723923369/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:40:33 compute-0 sudo[202343]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:33 compute-0 sudo[202495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulwahirdynvksvjcmhcppgsvphdsrbrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276833.6898854-680-7676578973196/AnsiballZ_container_config_data.py'
Dec 09 10:40:33 compute-0 sudo[202495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:34 compute-0 python3.9[202497]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 09 10:40:34 compute-0 sudo[202495]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:34 compute-0 sudo[202647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iemmcqncscbggnnpbhygstqmsmxbjozr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276834.4493988-689-14268553575962/AnsiballZ_container_config_hash.py'
Dec 09 10:40:34 compute-0 sudo[202647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:34 compute-0 python3.9[202649]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:40:34 compute-0 sudo[202647]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:35 compute-0 sshd-session[202650]: Connection closed by authenticating user daemon 159.223.8.217 port 36230 [preauth]
Dec 09 10:40:35 compute-0 sudo[202801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oybgjpepmmczcqhxpwrtpwcjcwsogxpl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276835.2540276-699-150945912071482/AnsiballZ_edpm_container_manage.py'
Dec 09 10:40:35 compute-0 sudo[202801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:35 compute-0 python3[202803]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:40:37 compute-0 podman[202817]: 2025-12-09 10:40:37.419933361 +0000 UTC m=+1.456480121 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 09 10:40:37 compute-0 podman[202917]: 2025-12-09 10:40:37.653282359 +0000 UTC m=+0.113007786 container create 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Dec 09 10:40:37 compute-0 podman[202917]: 2025-12-09 10:40:37.582840438 +0000 UTC m=+0.042565905 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec 09 10:40:37 compute-0 python3[202803]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Dec 09 10:40:37 compute-0 sudo[202801]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:38 compute-0 sudo[203100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeklroidhjgcpzfkpspcaheeiejduxrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276838.053913-707-63472370937143/AnsiballZ_stat.py'
Dec 09 10:40:38 compute-0 sudo[203100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:38 compute-0 python3.9[203102]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:40:38 compute-0 sudo[203100]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:39 compute-0 sudo[203254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azpefvpjpkduqptffkbjhurrmcitemmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276838.7761834-716-163023309801749/AnsiballZ_file.py'
Dec 09 10:40:39 compute-0 sudo[203254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:39 compute-0 python3.9[203256]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:39 compute-0 sudo[203254]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:39 compute-0 sudo[203405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aufasibuedntdugwrniowjgnnzftnxos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276839.339333-716-134629698888103/AnsiballZ_copy.py'
Dec 09 10:40:39 compute-0 sudo[203405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:40 compute-0 python3.9[203407]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276839.339333-716-134629698888103/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:40 compute-0 sudo[203405]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:40 compute-0 sudo[203497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqxdzhwvbbluzmkxissprlmqlhjzkljh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276839.339333-716-134629698888103/AnsiballZ_systemd.py'
Dec 09 10:40:40 compute-0 sudo[203497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:40 compute-0 podman[203455]: 2025-12-09 10:40:40.320726962 +0000 UTC m=+0.068662693 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd)
Dec 09 10:40:40 compute-0 python3.9[203503]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:40:40 compute-0 systemd[1]: Reloading.
Dec 09 10:40:40 compute-0 systemd-rc-local-generator[203525]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:40:40 compute-0 systemd-sysv-generator[203528]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:40:40 compute-0 sudo[203497]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:41 compute-0 sudo[203612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glldzrmwujhklexbqycgqjmdbkszuhmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276839.339333-716-134629698888103/AnsiballZ_systemd.py'
Dec 09 10:40:41 compute-0 sudo[203612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:41 compute-0 python3.9[203614]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:40:41 compute-0 systemd[1]: Reloading.
Dec 09 10:40:41 compute-0 systemd-sysv-generator[203651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:40:41 compute-0 systemd-rc-local-generator[203648]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:40:41 compute-0 systemd[1]: Starting podman_exporter container...
Dec 09 10:40:42 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:42 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.
Dec 09 10:40:42 compute-0 podman[203655]: 2025-12-09 10:40:42.093854817 +0000 UTC m=+0.181273646 container init 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:40:42 compute-0 podman[203655]: 2025-12-09 10:40:42.129743381 +0000 UTC m=+0.217162190 container start 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:40:42 compute-0 podman[203655]: podman_exporter
Dec 09 10:40:42 compute-0 systemd[1]: Started podman_exporter container.
Dec 09 10:40:42 compute-0 podman_exporter[203671]: ts=2025-12-09T10:40:42.150Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 09 10:40:42 compute-0 podman_exporter[203671]: ts=2025-12-09T10:40:42.150Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 09 10:40:42 compute-0 podman_exporter[203671]: ts=2025-12-09T10:40:42.151Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 09 10:40:42 compute-0 podman_exporter[203671]: ts=2025-12-09T10:40:42.151Z caller=handler.go:105 level=info collector=container
Dec 09 10:40:42 compute-0 systemd[1]: Starting Podman API Service...
Dec 09 10:40:42 compute-0 systemd[1]: Started Podman API Service.
Dec 09 10:40:42 compute-0 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 09 10:40:42 compute-0 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="Setting parallel job count to 25"
Dec 09 10:40:42 compute-0 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="Using sqlite as database backend"
Dec 09 10:40:42 compute-0 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 09 10:40:42 compute-0 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 09 10:40:42 compute-0 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec 09 10:40:42 compute-0 sudo[203612]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:42 compute-0 podman[203687]: @ - - [09/Dec/2025:10:40:42 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 09 10:40:42 compute-0 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:40:42 compute-0 podman[203676]: 2025-12-09 10:40:42.228840657 +0000 UTC m=+0.087448272 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:40:42 compute-0 podman[203687]: @ - - [09/Dec/2025:10:40:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19587 "" "Go-http-client/1.1"
Dec 09 10:40:42 compute-0 podman_exporter[203671]: ts=2025-12-09T10:40:42.233Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 09 10:40:42 compute-0 podman_exporter[203671]: ts=2025-12-09T10:40:42.234Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 09 10:40:42 compute-0 podman_exporter[203671]: ts=2025-12-09T10:40:42.235Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 09 10:40:42 compute-0 systemd[1]: 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9-71f17b6f5c3a2799.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:40:42 compute-0 systemd[1]: 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9-71f17b6f5c3a2799.service: Failed with result 'exit-code'.
Dec 09 10:40:43 compute-0 sudo[203865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cabniermvpjxokmbnrhnrkateuvmzsqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276842.3932204-740-104172127088311/AnsiballZ_systemd.py'
Dec 09 10:40:43 compute-0 sudo[203865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:43 compute-0 python3.9[203867]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:40:43 compute-0 systemd[1]: Stopping podman_exporter container...
Dec 09 10:40:43 compute-0 podman[203687]: @ - - [09/Dec/2025:10:40:42 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec 09 10:40:43 compute-0 systemd[1]: libpod-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
Dec 09 10:40:43 compute-0 podman[203871]: 2025-12-09 10:40:43.733147364 +0000 UTC m=+0.065924869 container died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:40:43 compute-0 systemd[1]: 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9-71f17b6f5c3a2799.timer: Deactivated successfully.
Dec 09 10:40:43 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.
Dec 09 10:40:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9-userdata-shm.mount: Deactivated successfully.
Dec 09 10:40:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d-merged.mount: Deactivated successfully.
Dec 09 10:40:44 compute-0 podman[203871]: 2025-12-09 10:40:44.051410633 +0000 UTC m=+0.384188108 container cleanup 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:40:44 compute-0 podman[203871]: podman_exporter
Dec 09 10:40:44 compute-0 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 09 10:40:44 compute-0 podman[203899]: podman_exporter
Dec 09 10:40:44 compute-0 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 09 10:40:44 compute-0 systemd[1]: Stopped podman_exporter container.
Dec 09 10:40:44 compute-0 systemd[1]: Starting podman_exporter container...
Dec 09 10:40:44 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:44 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.
Dec 09 10:40:44 compute-0 podman[203912]: 2025-12-09 10:40:44.325448593 +0000 UTC m=+0.144123899 container init 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:40:44 compute-0 podman_exporter[203927]: ts=2025-12-09T10:40:44.349Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 09 10:40:44 compute-0 podman_exporter[203927]: ts=2025-12-09T10:40:44.349Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 09 10:40:44 compute-0 podman_exporter[203927]: ts=2025-12-09T10:40:44.350Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 09 10:40:44 compute-0 podman_exporter[203927]: ts=2025-12-09T10:40:44.350Z caller=handler.go:105 level=info collector=container
Dec 09 10:40:44 compute-0 podman[203687]: @ - - [09/Dec/2025:10:40:44 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 09 10:40:44 compute-0 podman[203687]: time="2025-12-09T10:40:44Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:40:44 compute-0 podman[203912]: 2025-12-09 10:40:44.362895109 +0000 UTC m=+0.181570455 container start 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:40:44 compute-0 podman[203912]: podman_exporter
Dec 09 10:40:44 compute-0 systemd[1]: Started podman_exporter container.
Dec 09 10:40:44 compute-0 podman[203687]: @ - - [09/Dec/2025:10:40:44 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19589 "" "Go-http-client/1.1"
Dec 09 10:40:44 compute-0 podman_exporter[203927]: ts=2025-12-09T10:40:44.393Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 09 10:40:44 compute-0 podman_exporter[203927]: ts=2025-12-09T10:40:44.394Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 09 10:40:44 compute-0 podman_exporter[203927]: ts=2025-12-09T10:40:44.394Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec 09 10:40:44 compute-0 sudo[203865]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:44 compute-0 podman[203938]: 2025-12-09 10:40:44.453623178 +0000 UTC m=+0.075545119 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:40:44 compute-0 sudo[204113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wluznxbvsrytitrzttkvtirgmwhnsmgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276844.612743-748-49343517884362/AnsiballZ_stat.py'
Dec 09 10:40:44 compute-0 sudo[204113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:45 compute-0 python3.9[204115]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:40:45 compute-0 sudo[204113]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:45 compute-0 sudo[204236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oefjcxtivfdsjdjohzbrzuslktorqpcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276844.612743-748-49343517884362/AnsiballZ_copy.py'
Dec 09 10:40:45 compute-0 sudo[204236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:45 compute-0 python3.9[204238]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276844.612743-748-49343517884362/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:40:45 compute-0 sudo[204236]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:46 compute-0 sudo[204388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npugamtgefmdubmebtgpyaucfqutmykq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276846.0992196-765-206094810110148/AnsiballZ_container_config_data.py'
Dec 09 10:40:46 compute-0 sudo[204388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:46 compute-0 python3.9[204390]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 09 10:40:46 compute-0 sudo[204388]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:47 compute-0 sudo[204540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaromclcbngxewtwsyxlgwhznvmtfdtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276846.8451679-774-211521061164366/AnsiballZ_container_config_hash.py'
Dec 09 10:40:47 compute-0 sudo[204540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:47 compute-0 python3.9[204542]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:40:47 compute-0 sudo[204540]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:47 compute-0 sudo[204692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msttjkpnjxxlelrimwizbseogisjhavv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276847.6611633-784-265229042220346/AnsiballZ_edpm_container_manage.py'
Dec 09 10:40:47 compute-0 sudo[204692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:48 compute-0 python3[204694]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:40:51 compute-0 podman[204706]: 2025-12-09 10:40:51.735600487 +0000 UTC m=+3.366193530 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 09 10:40:51 compute-0 podman[204804]: 2025-12-09 10:40:51.885410249 +0000 UTC m=+0.061826757 container create 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec 09 10:40:51 compute-0 podman[204804]: 2025-12-09 10:40:51.851173901 +0000 UTC m=+0.027590419 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 09 10:40:51 compute-0 python3[204694]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec 09 10:40:52 compute-0 sudo[204692]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:52 compute-0 sudo[205004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npegpptuynppwxtfwwxvrdixdznugptg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276852.3307421-792-57202359151726/AnsiballZ_stat.py'
Dec 09 10:40:52 compute-0 sudo[205004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:52 compute-0 podman[204966]: 2025-12-09 10:40:52.689227923 +0000 UTC m=+0.089667982 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4)
Dec 09 10:40:52 compute-0 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-4dddc5eb05d7029b.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:40:52 compute-0 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-4dddc5eb05d7029b.service: Failed with result 'exit-code'.
Dec 09 10:40:52 compute-0 python3.9[205013]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:40:52 compute-0 sudo[205004]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:53 compute-0 sudo[205181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rknyzqlbbhqbdjshimmfrktmlhgnpiyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276853.200187-801-66676142257768/AnsiballZ_file.py'
Dec 09 10:40:53 compute-0 sudo[205181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:53 compute-0 podman[205139]: 2025-12-09 10:40:53.625296153 +0000 UTC m=+0.082821566 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 10:40:53 compute-0 python3.9[205186]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:53 compute-0 sudo[205181]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:54 compute-0 sudo[205335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqhamkvvpbnkaopeswcudawdggzflxuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276853.9346297-801-275817525910645/AnsiballZ_copy.py'
Dec 09 10:40:54 compute-0 sudo[205335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:54 compute-0 python3.9[205337]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276853.9346297-801-275817525910645/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:40:54 compute-0 sudo[205335]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:54 compute-0 sudo[205411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuksirmjiynysztraucvapbzlsssunbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276853.9346297-801-275817525910645/AnsiballZ_systemd.py'
Dec 09 10:40:54 compute-0 sudo[205411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:55 compute-0 python3.9[205413]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:40:55 compute-0 systemd[1]: Reloading.
Dec 09 10:40:55 compute-0 systemd-sysv-generator[205444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:40:55 compute-0 systemd-rc-local-generator[205440]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:40:55 compute-0 sudo[205411]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:55 compute-0 sudo[205522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czoyomuizmxeoilbnsdsivhcvbwpcjun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276853.9346297-801-275817525910645/AnsiballZ_systemd.py'
Dec 09 10:40:55 compute-0 sudo[205522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:56 compute-0 python3.9[205524]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:40:56 compute-0 systemd[1]: Reloading.
Dec 09 10:40:56 compute-0 systemd-rc-local-generator[205552]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:40:56 compute-0 systemd-sysv-generator[205555]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:40:56 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 09 10:40:56 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:56 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.
Dec 09 10:40:56 compute-0 podman[205564]: 2025-12-09 10:40:56.733879058 +0000 UTC m=+0.155577140 container init 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm)
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *bridge.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *coverage.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *datapath.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *iface.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *memory.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *ovnnorthd.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *ovn.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *ovsdbserver.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *pmd_perf.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *pmd_rxq.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *vswitch.Collector
Dec 09 10:40:56 compute-0 openstack_network_exporter[205580]: NOTICE  10:40:56 main.go:76: listening on https://:9105/metrics
Dec 09 10:40:56 compute-0 podman[205564]: 2025-12-09 10:40:56.763840519 +0000 UTC m=+0.185538581 container start 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 09 10:40:56 compute-0 podman[205564]: openstack_network_exporter
Dec 09 10:40:56 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 09 10:40:56 compute-0 sudo[205522]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:56 compute-0 podman[205585]: 2025-12-09 10:40:56.880150353 +0000 UTC m=+0.099234031 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41)
Dec 09 10:40:57 compute-0 sudo[205763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxhlmfaeniehqudzbxakeajwdbnqosku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276857.017904-825-55614723904702/AnsiballZ_systemd.py'
Dec 09 10:40:57 compute-0 sudo[205763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:57 compute-0 python3.9[205765]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:40:57 compute-0 systemd[1]: Stopping openstack_network_exporter container...
Dec 09 10:40:57 compute-0 systemd[1]: libpod-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
Dec 09 10:40:57 compute-0 podman[205769]: 2025-12-09 10:40:57.745787214 +0000 UTC m=+0.059817953 container died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=)
Dec 09 10:40:57 compute-0 systemd[1]: 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d-4a56f681095a7cee.timer: Deactivated successfully.
Dec 09 10:40:57 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.
Dec 09 10:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d-userdata-shm.mount: Deactivated successfully.
Dec 09 10:40:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde-merged.mount: Deactivated successfully.
Dec 09 10:40:58 compute-0 podman[205769]: 2025-12-09 10:40:58.444301822 +0000 UTC m=+0.758332521 container cleanup 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 09 10:40:58 compute-0 podman[205769]: openstack_network_exporter
Dec 09 10:40:58 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 09 10:40:58 compute-0 podman[205794]: openstack_network_exporter
Dec 09 10:40:58 compute-0 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 09 10:40:58 compute-0 systemd[1]: Stopped openstack_network_exporter container.
Dec 09 10:40:58 compute-0 systemd[1]: Starting openstack_network_exporter container...
Dec 09 10:40:58 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:40:58 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.
Dec 09 10:40:58 compute-0 podman[205807]: 2025-12-09 10:40:58.696585282 +0000 UTC m=+0.150850821 container init 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *bridge.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *coverage.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *datapath.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *iface.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *memory.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *ovnnorthd.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *ovn.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *ovsdbserver.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *pmd_perf.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *pmd_rxq.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *vswitch.Collector
Dec 09 10:40:58 compute-0 openstack_network_exporter[205823]: NOTICE  10:40:58 main.go:76: listening on https://:9105/metrics
Dec 09 10:40:58 compute-0 podman[205807]: 2025-12-09 10:40:58.72747377 +0000 UTC m=+0.181739309 container start 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm)
Dec 09 10:40:58 compute-0 podman[205807]: openstack_network_exporter
Dec 09 10:40:58 compute-0 systemd[1]: Started openstack_network_exporter container.
Dec 09 10:40:58 compute-0 sudo[205763]: pam_unix(sudo:session): session closed for user root
Dec 09 10:40:58 compute-0 podman[205833]: 2025-12-09 10:40:58.814632503 +0000 UTC m=+0.074884531 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6)
Dec 09 10:40:59 compute-0 sudo[206003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rehascwiwvkshtirwjomuqjmziokxbdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276858.964469-833-73645951434846/AnsiballZ_find.py'
Dec 09 10:40:59 compute-0 sudo[206003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:40:59 compute-0 python3.9[206005]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 10:40:59 compute-0 sudo[206003]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:00 compute-0 sudo[206169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efxcoxgfpgipwyizaxmvulwucoewfaxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276859.8200927-843-59714459946377/AnsiballZ_podman_container_info.py'
Dec 09 10:41:00 compute-0 sudo[206169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:00 compute-0 podman[206129]: 2025-12-09 10:41:00.460250962 +0000 UTC m=+0.119904432 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 09 10:41:00 compute-0 python3.9[206175]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 09 10:41:00 compute-0 sudo[206169]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:01 compute-0 sudo[206344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfazskinmwmbsnjgibthzvugnlkqzdrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276860.9310393-851-168071717912979/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:01 compute-0 sudo[206344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:01 compute-0 python3.9[206346]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:02 compute-0 systemd[1]: Started libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope.
Dec 09 10:41:02 compute-0 podman[206347]: 2025-12-09 10:41:02.064464268 +0000 UTC m=+0.122251306 container exec e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 09 10:41:02 compute-0 podman[206347]: 2025-12-09 10:41:02.096074404 +0000 UTC m=+0.153861442 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 09 10:41:02 compute-0 systemd[1]: libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope: Deactivated successfully.
Dec 09 10:41:02 compute-0 sudo[206344]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:02 compute-0 podman[206379]: 2025-12-09 10:41:02.23718775 +0000 UTC m=+0.066714530 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:41:02 compute-0 sudo[206550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgbmwmxvgrnytmrxfslhjlptgjbuqmjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276862.3177464-859-165737969272378/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:02 compute-0 sudo[206550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:02 compute-0 python3.9[206552]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:02 compute-0 systemd[1]: Started libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope.
Dec 09 10:41:02 compute-0 podman[206553]: 2025-12-09 10:41:02.958419266 +0000 UTC m=+0.089871728 container exec e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 10:41:03 compute-0 podman[206572]: 2025-12-09 10:41:03.021002562 +0000 UTC m=+0.047646973 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Dec 09 10:41:03 compute-0 podman[206553]: 2025-12-09 10:41:03.026854031 +0000 UTC m=+0.158306473 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 09 10:41:03 compute-0 systemd[1]: libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope: Deactivated successfully.
Dec 09 10:41:03 compute-0 sudo[206550]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:03 compute-0 sudo[206736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwkmgszsannrgjxjfyhiuxhijvpmimtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276863.6823423-867-166412309210213/AnsiballZ_file.py'
Dec 09 10:41:03 compute-0 sudo[206736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:04 compute-0 python3.9[206738]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:04 compute-0 sudo[206736]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:04 compute-0 sshd-session[206661]: Connection closed by authenticating user daemon 159.223.8.217 port 51154 [preauth]
Dec 09 10:41:04 compute-0 sudo[206888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfhgptmgcmcllenpszdeackpzmxqxxbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276864.3815095-876-219593135303282/AnsiballZ_podman_container_info.py'
Dec 09 10:41:04 compute-0 sudo[206888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:04 compute-0 python3.9[206890]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 09 10:41:04 compute-0 sudo[206888]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:05 compute-0 sudo[207053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coenywzkbrjuwvvvndfpymqqofokztar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276865.3285036-884-208649685720250/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:05 compute-0 sudo[207053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:05 compute-0 python3.9[207055]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:05 compute-0 systemd[1]: Started libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope.
Dec 09 10:41:05 compute-0 podman[207056]: 2025-12-09 10:41:05.914029442 +0000 UTC m=+0.083739731 container exec 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 10:41:05 compute-0 podman[207056]: 2025-12-09 10:41:05.945006042 +0000 UTC m=+0.114716371 container exec_died 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 09 10:41:05 compute-0 systemd[1]: libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope: Deactivated successfully.
Dec 09 10:41:05 compute-0 sudo[207053]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:06 compute-0 sudo[207235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thxplcujvwghmvjiaihnochwnpgphysp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276866.1682227-892-151212968256426/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:06 compute-0 sudo[207235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:06 compute-0 python3.9[207237]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:06 compute-0 systemd[1]: Started libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope.
Dec 09 10:41:06 compute-0 podman[207238]: 2025-12-09 10:41:06.788067061 +0000 UTC m=+0.095929893 container exec 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 10:41:06 compute-0 podman[207238]: 2025-12-09 10:41:06.822207626 +0000 UTC m=+0.130070448 container exec_died 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 09 10:41:06 compute-0 systemd[1]: libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope: Deactivated successfully.
Dec 09 10:41:06 compute-0 sudo[207235]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:07 compute-0 sudo[207419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avtlbsymorfcwqrhyfahdtlgmagxykhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276867.043347-900-37588562071009/AnsiballZ_file.py'
Dec 09 10:41:07 compute-0 sudo[207419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:07 compute-0 python3.9[207421]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:07 compute-0 sudo[207419]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:08 compute-0 sudo[207571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqrqmbzbtebctmrijqgearxxmpcsttu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276867.8512638-909-196169436731387/AnsiballZ_podman_container_info.py'
Dec 09 10:41:08 compute-0 sudo[207571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:08 compute-0 python3.9[207573]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 09 10:41:08 compute-0 sudo[207571]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:08 compute-0 sudo[207736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsgcmmtqitwxtldzyhklloprrjvfmuhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276868.634623-917-110687847187154/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:08 compute-0 sudo[207736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:09 compute-0 python3.9[207738]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:09 compute-0 systemd[1]: Started libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope.
Dec 09 10:41:09 compute-0 podman[207739]: 2025-12-09 10:41:09.273376216 +0000 UTC m=+0.082354573 container exec 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:41:09 compute-0 podman[207739]: 2025-12-09 10:41:09.304221332 +0000 UTC m=+0.113199679 container exec_died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 09 10:41:09 compute-0 systemd[1]: libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec 09 10:41:09 compute-0 sudo[207736]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:09 compute-0 sudo[207921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbocgzhuosrrkkhmptiulgrjkorzlcst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276869.5211086-925-40121767563309/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:09 compute-0 sudo[207921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:10 compute-0 python3.9[207923]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:10 compute-0 systemd[1]: Started libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope.
Dec 09 10:41:10 compute-0 podman[207924]: 2025-12-09 10:41:10.169402101 +0000 UTC m=+0.092850919 container exec 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 10:41:10 compute-0 podman[207924]: 2025-12-09 10:41:10.203185217 +0000 UTC m=+0.126633995 container exec_died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 09 10:41:10 compute-0 systemd[1]: libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec 09 10:41:10 compute-0 sudo[207921]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:10 compute-0 sudo[208115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqurchoykuqimtljjzohcxqwbumnkpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276870.4604287-933-113340322075060/AnsiballZ_file.py'
Dec 09 10:41:10 compute-0 sudo[208115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:10 compute-0 podman[208077]: 2025-12-09 10:41:10.781724532 +0000 UTC m=+0.063607155 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:41:10 compute-0 python3.9[208124]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:10 compute-0 sudo[208115]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:11 compute-0 sudo[208275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuhkuntrcogixfyxjmqgqeutzuqwvabv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276871.1875627-942-91331690166941/AnsiballZ_podman_container_info.py'
Dec 09 10:41:11 compute-0 sudo[208275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:11 compute-0 python3.9[208277]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 09 10:41:11 compute-0 sudo[208275]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:12 compute-0 sudo[208441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhkhimlmakpqhnzrovsnlyvqrrsngztg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276872.0799499-950-193440711597448/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:12 compute-0 sudo[208441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:12 compute-0 python3.9[208443]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:12 compute-0 systemd[1]: Started libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope.
Dec 09 10:41:12 compute-0 podman[208444]: 2025-12-09 10:41:12.704818164 +0000 UTC m=+0.096585670 container exec b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 09 10:41:12 compute-0 podman[208444]: 2025-12-09 10:41:12.742367462 +0000 UTC m=+0.134134878 container exec_died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 10:41:12 compute-0 systemd[1]: libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec 09 10:41:12 compute-0 sudo[208441]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:13 compute-0 sudo[208626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvqrqusuudbbttwhpqkbkbqchgxmcsva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276873.074705-958-123181108148859/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:13 compute-0 sudo[208626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:13 compute-0 python3.9[208628]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:13 compute-0 systemd[1]: Started libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope.
Dec 09 10:41:13 compute-0 podman[208629]: 2025-12-09 10:41:13.652242382 +0000 UTC m=+0.078518210 container exec b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec 09 10:41:13 compute-0 podman[208629]: 2025-12-09 10:41:13.68755474 +0000 UTC m=+0.113830598 container exec_died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 09 10:41:13 compute-0 systemd[1]: libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec 09 10:41:13 compute-0 sudo[208626]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:14 compute-0 sudo[208812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccztiwkkufwubbayclspnaksrynakzmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276873.9293733-966-259960443473489/AnsiballZ_file.py'
Dec 09 10:41:14 compute-0 sudo[208812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:14 compute-0 python3.9[208814]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:14 compute-0 sudo[208812]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:14 compute-0 podman[208914]: 2025-12-09 10:41:14.932931966 +0000 UTC m=+0.084121282 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:41:15 compute-0 sudo[208988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zctanseyjvrfcrifgsbfttiakultjngh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276874.6793232-975-206774840785140/AnsiballZ_podman_container_info.py'
Dec 09 10:41:15 compute-0 sudo[208988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:15 compute-0 python3.9[208990]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 09 10:41:15 compute-0 sudo[208988]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:15 compute-0 sudo[209153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxvwkrevitccjelopcjhadhelxibdksw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276875.6112976-983-123265040671035/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:15 compute-0 sudo[209153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:16 compute-0 python3.9[209155]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:16 compute-0 systemd[1]: Started libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope.
Dec 09 10:41:16 compute-0 podman[209156]: 2025-12-09 10:41:16.272050134 +0000 UTC m=+0.091853172 container exec d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:41:16 compute-0 podman[209156]: 2025-12-09 10:41:16.305308826 +0000 UTC m=+0.125111844 container exec_died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:41:16 compute-0 systemd[1]: libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec 09 10:41:16 compute-0 sudo[209153]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:16 compute-0 sudo[209337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tquzjlpwsahfzaznygythmrkicxzuxlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276876.5053875-991-45045146007198/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:16 compute-0 sudo[209337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:41:16.968 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:41:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:41:16.969 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:41:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:41:16.969 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:41:17 compute-0 python3.9[209339]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:17 compute-0 systemd[1]: Started libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope.
Dec 09 10:41:17 compute-0 podman[209340]: 2025-12-09 10:41:17.111004461 +0000 UTC m=+0.080016851 container exec d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:41:17 compute-0 podman[209340]: 2025-12-09 10:41:17.145086115 +0000 UTC m=+0.114098425 container exec_died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:41:17 compute-0 systemd[1]: libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec 09 10:41:17 compute-0 sudo[209337]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.213 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.214 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.242 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.242 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.243 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.243 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.243 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.243 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:41:17 compute-0 sudo[209521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjzmrnukvxpxdtinmaivpfmfcbjocwvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276877.3930316-999-76200987900464/AnsiballZ_file.py'
Dec 09 10:41:17 compute-0 sudo[209521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.874 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.875 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.876 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:41:17 compute-0 nova_compute[189493]: 2025-12-09 10:41:17.877 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:41:17 compute-0 python3.9[209523]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:17 compute-0 sudo[209521]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.058 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.060 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5837MB free_disk=72.23798370361328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.060 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.061 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.325 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.325 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.348 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.361 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.363 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:41:18 compute-0 nova_compute[189493]: 2025-12-09 10:41:18.363 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:41:18 compute-0 sudo[209673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axdjtpnqttkiciydregcvlgbrdiqxflt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276878.3607576-1008-271997587558430/AnsiballZ_podman_container_info.py'
Dec 09 10:41:18 compute-0 sudo[209673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:18 compute-0 python3.9[209675]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 09 10:41:18 compute-0 sudo[209673]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:19 compute-0 nova_compute[189493]: 2025-12-09 10:41:19.361 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:19 compute-0 nova_compute[189493]: 2025-12-09 10:41:19.361 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:41:19 compute-0 nova_compute[189493]: 2025-12-09 10:41:19.361 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:41:19 compute-0 nova_compute[189493]: 2025-12-09 10:41:19.385 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 10:41:19 compute-0 nova_compute[189493]: 2025-12-09 10:41:19.386 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:41:19 compute-0 sudo[209838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjeatbjhctupebsutsfpuupimvjcskgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276879.1745791-1016-7009952829724/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:19 compute-0 sudo[209838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:20 compute-0 python3.9[209840]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:20 compute-0 systemd[1]: Started libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope.
Dec 09 10:41:20 compute-0 podman[209841]: 2025-12-09 10:41:20.141794916 +0000 UTC m=+0.102062269 container exec 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:41:20 compute-0 podman[209841]: 2025-12-09 10:41:20.172737445 +0000 UTC m=+0.133004738 container exec_died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:41:20 compute-0 systemd[1]: libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
Dec 09 10:41:20 compute-0 sudo[209838]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:20 compute-0 sudo[210021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shjavswiglaggwisxfwmjdcdcueajcht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276880.3943331-1024-195094791908504/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:20 compute-0 sudo[210021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:20 compute-0 python3.9[210023]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:21 compute-0 systemd[1]: Started libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope.
Dec 09 10:41:21 compute-0 podman[210024]: 2025-12-09 10:41:21.095874224 +0000 UTC m=+0.085271223 container exec 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:41:21 compute-0 podman[210024]: 2025-12-09 10:41:21.126587846 +0000 UTC m=+0.115984795 container exec_died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:41:21 compute-0 systemd[1]: libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
Dec 09 10:41:21 compute-0 sudo[210021]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:21 compute-0 sudo[210205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czttawjzsflezmhkkcsabpmumopatqth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276881.3215172-1032-20748601687388/AnsiballZ_file.py'
Dec 09 10:41:21 compute-0 sudo[210205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:21 compute-0 python3.9[210207]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:21 compute-0 sudo[210205]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:22 compute-0 sudo[210357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrkgygpcazuwhsvzftixjcuthxzwxfdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276882.0763285-1041-270797857642825/AnsiballZ_podman_container_info.py'
Dec 09 10:41:22 compute-0 sudo[210357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:22 compute-0 python3.9[210359]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 09 10:41:22 compute-0 sudo[210357]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:22 compute-0 podman[210397]: 2025-12-09 10:41:22.92879498 +0000 UTC m=+0.085774006 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 09 10:41:23 compute-0 sudo[210542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phvcfbgvdonmomcfazjynecejqazxvab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276882.9272528-1049-139944968576115/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:23 compute-0 sudo[210542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:23 compute-0 python3.9[210544]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:23 compute-0 systemd[1]: Started libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope.
Dec 09 10:41:23 compute-0 podman[210545]: 2025-12-09 10:41:23.587542792 +0000 UTC m=+0.086087856 container exec 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container)
Dec 09 10:41:23 compute-0 podman[210545]: 2025-12-09 10:41:23.598043386 +0000 UTC m=+0.096588430 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter)
Dec 09 10:41:23 compute-0 systemd[1]: libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
Dec 09 10:41:23 compute-0 sudo[210542]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:23 compute-0 podman[210574]: 2025-12-09 10:41:23.741870216 +0000 UTC m=+0.066589937 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 09 10:41:24 compute-0 sudo[210741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrojcvkimcxlbpruqrpocdtpdsllugmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276883.8415875-1057-275211499287959/AnsiballZ_podman_container_exec.py'
Dec 09 10:41:24 compute-0 sudo[210741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:24 compute-0 python3.9[210743]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:41:24 compute-0 systemd[1]: Started libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope.
Dec 09 10:41:24 compute-0 podman[210744]: 2025-12-09 10:41:24.482742533 +0000 UTC m=+0.077409969 container exec 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 09 10:41:24 compute-0 podman[210744]: 2025-12-09 10:41:24.516116468 +0000 UTC m=+0.110783884 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 09 10:41:24 compute-0 systemd[1]: libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
Dec 09 10:41:24 compute-0 sudo[210741]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:25 compute-0 sudo[210924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apjhcxmiupbdurviulbslabjiffnrfny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276884.728538-1065-5698793532907/AnsiballZ_file.py'
Dec 09 10:41:25 compute-0 sudo[210924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:25 compute-0 python3.9[210926]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:25 compute-0 sudo[210924]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:25 compute-0 sudo[211076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aybzxjzcidvontbuidkudoqipozdvbmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276885.4763696-1074-58936263491205/AnsiballZ_file.py'
Dec 09 10:41:25 compute-0 sudo[211076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:25 compute-0 python3.9[211078]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:25 compute-0 sudo[211076]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:26 compute-0 sudo[211228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdnymrvdotgggrgzxeleaqpuewnlsuyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276886.156458-1082-230601162739379/AnsiballZ_stat.py'
Dec 09 10:41:26 compute-0 sudo[211228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:26 compute-0 python3.9[211230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:41:26 compute-0 sudo[211228]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:27 compute-0 sudo[211351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxdlofdmmkomckwxaausbskgsvualjpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276886.156458-1082-230601162739379/AnsiballZ_copy.py'
Dec 09 10:41:27 compute-0 sudo[211351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:27 compute-0 python3.9[211353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276886.156458-1082-230601162739379/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:27 compute-0 sudo[211351]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:28 compute-0 sudo[211503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aijuqddfjgkibjrhboffiuctloupqypg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276887.7813802-1098-76376890839899/AnsiballZ_file.py'
Dec 09 10:41:28 compute-0 sudo[211503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:28 compute-0 python3.9[211505]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:28 compute-0 sudo[211503]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:28 compute-0 sudo[211655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aycjoajlcrjuhokhqoufngvdlbzrbzli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276888.506894-1106-139524327740455/AnsiballZ_stat.py'
Dec 09 10:41:28 compute-0 sudo[211655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:29 compute-0 python3.9[211657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:41:29 compute-0 sudo[211655]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:29 compute-0 sudo[211746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnqnnpnjzuiiswiqcixxvxduwxauzmwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276888.506894-1106-139524327740455/AnsiballZ_file.py'
Dec 09 10:41:29 compute-0 sudo[211746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:29 compute-0 podman[211707]: 2025-12-09 10:41:29.387109948 +0000 UTC m=+0.111959347 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 10:41:29 compute-0 python3.9[211754]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:29 compute-0 sudo[211746]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:30 compute-0 sudo[211906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztyyvimrfgoaruwucgkvfwmjtaxweyyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276889.7834737-1118-100940586290144/AnsiballZ_stat.py'
Dec 09 10:41:30 compute-0 sudo[211906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:30 compute-0 python3.9[211908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:41:30 compute-0 sudo[211906]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:30 compute-0 sudo[211993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlosvatlgvfbqmknkqxnrfkadmgddvug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276889.7834737-1118-100940586290144/AnsiballZ_file.py'
Dec 09 10:41:30 compute-0 sudo[211993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:30 compute-0 podman[211958]: 2025-12-09 10:41:30.657690497 +0000 UTC m=+0.096954659 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:41:30 compute-0 python3.9[212001]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.545i0732 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:30 compute-0 sudo[211993]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:31 compute-0 sudo[212159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymfztsyspeicmloiukoyfqoyeirpgfrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276890.9976156-1130-153122181638835/AnsiballZ_stat.py'
Dec 09 10:41:31 compute-0 sudo[212159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:31 compute-0 python3.9[212161]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:41:31 compute-0 sudo[212159]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:31 compute-0 sudo[212237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdnokcdcdqalvrnuvxxioqzzrhubuhxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276890.9976156-1130-153122181638835/AnsiballZ_file.py'
Dec 09 10:41:31 compute-0 sudo[212237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:32 compute-0 python3.9[212239]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:32 compute-0 sudo[212237]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:32 compute-0 sudo[212399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srthhwmtnqykywbpodlwdqzoltpcmftm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276892.3106163-1143-269574439115338/AnsiballZ_command.py'
Dec 09 10:41:32 compute-0 sudo[212399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:32 compute-0 podman[212363]: 2025-12-09 10:41:32.679207707 +0000 UTC m=+0.080204625 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:41:32 compute-0 python3.9[212404]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:41:32 compute-0 sudo[212399]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:33 compute-0 sshd-session[212460]: Connection closed by authenticating user daemon 159.223.8.217 port 44054 [preauth]
Dec 09 10:41:33 compute-0 sudo[212566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evvxcildtygtfgwwhgqmqgtzkmgyuyyk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276893.0958223-1151-13922760270557/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 10:41:33 compute-0 sudo[212566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:33 compute-0 python3[212568]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 10:41:33 compute-0 sudo[212566]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:34 compute-0 sudo[212718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khdruvwnrfpnjuvtmnetwyjrlilirhbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276894.0708418-1159-122093221759008/AnsiballZ_stat.py'
Dec 09 10:41:34 compute-0 sudo[212718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:34 compute-0 python3.9[212720]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:41:34 compute-0 sudo[212718]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:34 compute-0 sudo[212796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlvvcryonhnudmcposryryuuuxmgssgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276894.0708418-1159-122093221759008/AnsiballZ_file.py'
Dec 09 10:41:34 compute-0 sudo[212796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:35 compute-0 python3.9[212798]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:35 compute-0 sudo[212796]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:35 compute-0 sudo[212948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysyaqtqgcitdbdpwqozpqzaupglccvgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276895.5726087-1171-97427236357215/AnsiballZ_stat.py'
Dec 09 10:41:35 compute-0 sudo[212948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:36 compute-0 python3.9[212950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:41:36 compute-0 sudo[212948]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:36 compute-0 sudo[213026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omhbgjzwjrplvzozgvcurohbwqdcwhrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276895.5726087-1171-97427236357215/AnsiballZ_file.py'
Dec 09 10:41:36 compute-0 sudo[213026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:36 compute-0 python3.9[213028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:36 compute-0 sudo[213026]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:37 compute-0 sudo[213178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmipgaczjqjypggxljfyaccgzkofifsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276896.8470483-1183-63519383344974/AnsiballZ_stat.py'
Dec 09 10:41:37 compute-0 sudo[213178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:37 compute-0 python3.9[213180]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:41:37 compute-0 sudo[213178]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:37 compute-0 sudo[213256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciphvjttlplpmcjtodebtuukykzovmrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276896.8470483-1183-63519383344974/AnsiballZ_file.py'
Dec 09 10:41:37 compute-0 sudo[213256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:37 compute-0 python3.9[213258]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:37 compute-0 sudo[213256]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:38 compute-0 sudo[213408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhfhxchzjyaefbbvwekcbkbqgsudpufe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276898.0883808-1195-144688087343354/AnsiballZ_stat.py'
Dec 09 10:41:38 compute-0 sudo[213408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:38 compute-0 python3.9[213410]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:41:38 compute-0 sudo[213408]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:38 compute-0 sudo[213486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adjxylykvxpdmbemsapvvczwpgfhlewp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276898.0883808-1195-144688087343354/AnsiballZ_file.py'
Dec 09 10:41:38 compute-0 sudo[213486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:39 compute-0 python3.9[213488]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:39 compute-0 sudo[213486]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:39 compute-0 sudo[213638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrryjkkpgtoivqiycrjvjbhrlioeslsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276899.2846398-1207-97465057705155/AnsiballZ_stat.py'
Dec 09 10:41:39 compute-0 sudo[213638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:39 compute-0 python3.9[213640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:41:39 compute-0 sudo[213638]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:40 compute-0 sudo[213763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpyxoxiakxouqotvausvjpokexeifout ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276899.2846398-1207-97465057705155/AnsiballZ_copy.py'
Dec 09 10:41:40 compute-0 sudo[213763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:40 compute-0 python3.9[213765]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276899.2846398-1207-97465057705155/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:40 compute-0 sudo[213763]: pam_unix(sudo:session): session closed for user root
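[note] At this point five fragments exist under /etc/nftables, all root-owned with mode 0600: edpm-jumps.nft, edpm-update-jumps.nft, edpm-flushes.nft, edpm-chains.nft, and edpm-rules.nft (the last freshly copied, checksum fb3275ec…). A quick audit matching the logged file tasks:

    stat -c '%a %U:%G %n' /etc/nftables/edpm-*.nft   # expect: 600 root:root for each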
Dec 09 10:41:40 compute-0 podman[213865]: 2025-12-09 10:41:40.932137827 +0000 UTC m=+0.076773746 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:41:40 compute-0 sudo[213935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndmhkkplouubjavprqezqljjmtrwxdfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276900.6553507-1222-24829361983543/AnsiballZ_file.py'
Dec 09 10:41:40 compute-0 sudo[213935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:41 compute-0 python3.9[213937]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:41 compute-0 sudo[213935]: pam_unix(sudo:session): session closed for user root
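[note] The zero-length edpm-rules.nft.changed file is a change marker: it is created here because the rules file was rewritten, checked again at 10:41:44, and deleted at 10:41:46 once the rules are live. The pattern, reduced to shell:

    sudo touch /etc/nftables/edpm-rules.nft.changed              # set when the rules file changes
    test -e /etc/nftables/edpm-rules.nft.changed && echo reload  # tested before applying
    sudo rm -f /etc/nftables/edpm-rules.nft.changed              # cleared after a successful apply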
Dec 09 10:41:41 compute-0 sudo[214087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mavveclxdyagwophqflssimkziqzyery ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276901.3875318-1230-255852628890726/AnsiballZ_command.py'
Dec 09 10:41:41 compute-0 sudo[214087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:41 compute-0 python3.9[214089]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:41:41 compute-0 sudo[214087]: pam_unix(sudo:session): session closed for user root
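[note] Before touching the kernel, the whole ruleset is dry-run checked: the fragments are concatenated in load order and passed to nft's check mode, which parses and validates without committing anything. The logged pipeline, reformatted:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: syntax/semantic check only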
Dec 09 10:41:42 compute-0 sudo[214242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gadojxcjnzbpcbnvfrzfxcedelzzuecj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276902.1773503-1238-10870305431024/AnsiballZ_blockinfile.py'
Dec 09 10:41:42 compute-0 sudo[214242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:43 compute-0 python3.9[214244]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:43 compute-0 sudo[214242]: pam_unix(sudo:session): session closed for user root
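[note] Persistence is handled separately from activation: blockinfile writes a marked include block into /etc/sysconfig/nftables.conf (validated with 'nft -c -f %s' before the file is swapped in) so nftables.service restores the EDPM rules at boot. Reconstructed from the invocation, the managed block reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Note that the flush and update-jumps fragments are deliberately not persisted; they are runtime-only tools for reloading an already-booted host.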
Dec 09 10:41:43 compute-0 sudo[214394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmfpmtvarvjovqwqytofukfigtdzjqez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276903.3878121-1247-150163770327096/AnsiballZ_command.py'
Dec 09 10:41:43 compute-0 sudo[214394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:43 compute-0 python3.9[214396]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:41:43 compute-0 sudo[214394]: pam_unix(sudo:session): session closed for user root
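[note] Activation starts with the chains fragment alone: creating tables and chains is idempotent, and it must precede the flush step, which would fail against chains that do not exist yet. The logged command:

    sudo nft -f /etc/nftables/edpm-chains.nft   # create tables/chains before any flush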
Dec 09 10:41:44 compute-0 sudo[214547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prnyrmhjicglceryvussleayakttafgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276904.0874536-1255-96539666251594/AnsiballZ_stat.py'
Dec 09 10:41:44 compute-0 sudo[214547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:44 compute-0 python3.9[214549]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:41:44 compute-0 sudo[214547]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:45 compute-0 sudo[214714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kluuqxmwmridxopmegvtyosdyxvpbice ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276904.800575-1263-229677613480899/AnsiballZ_command.py'
Dec 09 10:41:45 compute-0 sudo[214714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:45 compute-0 podman[214675]: 2025-12-09 10:41:45.168279528 +0000 UTC m=+0.065772136 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:41:45 compute-0 python3.9[214720]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:41:45 compute-0 sudo[214714]: pam_unix(sudo:session): session closed for user root
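[note] Because the .changed marker was present, the rules are now applied: flushes, rules, and update-jumps are concatenated and fed to 'nft -f -', which commits the stream as a single transaction, so the chains are never observed half-flushed. As run:

    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -   # one atomic commit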
Dec 09 10:41:45 compute-0 sudo[214880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hupcdootvofhdqyivcdeekhxrehecehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276905.5775483-1271-263589435930846/AnsiballZ_file.py'
Dec 09 10:41:45 compute-0 sudo[214880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:46 compute-0 python3.9[214882]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:41:46 compute-0 sudo[214880]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:46 compute-0 sshd-session[189801]: Connection closed by 192.168.122.30 port 36668
Dec 09 10:41:46 compute-0 sshd-session[189792]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:41:46 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Dec 09 10:41:46 compute-0 systemd[1]: session-26.scope: Consumed 1min 46.429s CPU time.
Dec 09 10:41:46 compute-0 systemd-logind[806]: Session 26 logged out. Waiting for processes to exit.
Dec 09 10:41:46 compute-0 systemd-logind[806]: Removed session 26.
Dec 09 10:41:51 compute-0 sshd-session[214909]: Accepted publickey for zuul from 192.168.122.30 port 39610 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:41:51 compute-0 systemd-logind[806]: New session 27 of user zuul.
Dec 09 10:41:51 compute-0 systemd[1]: Started Session 27 of User zuul.
Dec 09 10:41:51 compute-0 sshd-session[214909]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:41:52 compute-0 sudo[215062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrmunipmbmhdovnmrmdqhgxoacljswxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276912.0745857-24-82883062253929/AnsiballZ_systemd_service.py'
Dec 09 10:41:52 compute-0 sudo[215062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:52 compute-0 python3.9[215064]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:41:53 compute-0 systemd[1]: Reloading.
Dec 09 10:41:53 compute-0 systemd-sysv-generator[215108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:41:53 compute-0 systemd-rc-local-generator[215105]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:41:53 compute-0 podman[215066]: 2025-12-09 10:41:53.171216167 +0000 UTC m=+0.104595501 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 09 10:41:53 compute-0 sudo[215062]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:53 compute-0 podman[215220]: 2025-12-09 10:41:53.937863696 +0000 UTC m=+0.083948131 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 09 10:41:54 compute-0 python3.9[215289]: ansible-ansible.builtin.service_facts Invoked
Dec 09 10:41:54 compute-0 network[215306]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 09 10:41:54 compute-0 network[215307]: 'network-scripts' will be removed from distribution in near future.
Dec 09 10:41:54 compute-0 network[215308]: It is advised to switch to 'NetworkManager' instead for network management.
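[note] The generator messages during systemd "Reloading." are expected on this image: the legacy network initscript has no native unit, so systemd-sysv-generator synthesizes a compatibility unit at every daemon-reload, and exercising it triggers the network-scripts deprecation banner above. The generated unit can be inspected with:

    systemctl cat network.service   # compat unit synthesized under /run/systemd/generator.late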
Dec 09 10:41:59 compute-0 sudo[215579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quppbzbkrokaznzsfubxrhpusrvjrlsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276918.8416336-47-213840314953752/AnsiballZ_systemd_service.py'
Dec 09 10:41:59 compute-0 sudo[215579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:41:59 compute-0 python3.9[215581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:41:59 compute-0 sudo[215579]: pam_unix(sudo:session): session closed for user root
Dec 09 10:41:59 compute-0 podman[203687]: time="2025-12-09T10:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:41:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22542 "" "Go-http-client/1.1"
Dec 09 10:41:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3415 "" "Go-http-client/1.1"
Dec 09 10:41:59 compute-0 podman[215583]: 2025-12-09 10:41:59.838828356 +0000 UTC m=+0.102172405 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 09 10:42:00 compute-0 sudo[215754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siswivyudgqmafzoqegoqlyntzgyedrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276920.0479672-57-73896428497989/AnsiballZ_file.py'
Dec 09 10:42:00 compute-0 sudo[215754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:00 compute-0 python3.9[215756]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:00 compute-0 sudo[215754]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:00 compute-0 podman[215781]: 2025-12-09 10:42:00.993066099 +0000 UTC m=+0.136827047 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 09 10:42:01 compute-0 sudo[215930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usixzzguoyaylfprkdqrazkoetiomwpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276920.9239838-65-44664286106815/AnsiballZ_file.py'
Dec 09 10:42:01 compute-0 sudo[215930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:01 compute-0 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:42:01 compute-0 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:42:01 compute-0 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:42:01 compute-0 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:42:01 compute-0 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:42:01 compute-0 python3.9[215932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:01 compute-0 sudo[215930]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:02 compute-0 sudo[216087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lubwtwcsuelvvqoiwndlwudmteipdsdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276921.737107-74-42972769404496/AnsiballZ_command.py'
Dec 09 10:42:02 compute-0 sudo[216087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:02 compute-0 python3.9[216089]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:42:02 compute-0 sudo[216087]: pam_unix(sudo:session): session closed for user root
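[note] The certmonger cleanup is a guarded one-shot: only if the service is active is it stopped and disabled, and it is masked only when no local unit override exists under /etc. The logged shell, reindented:

    if systemctl is-active certmonger.service; then
        systemctl disable --now certmonger.service
        test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi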
Dec 09 10:42:02 compute-0 podman[216168]: 2025-12-09 10:42:02.927301364 +0000 UTC m=+0.084252570 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:42:03 compute-0 python3.9[216267]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 10:42:03 compute-0 sshd-session[216177]: Connection closed by authenticating user daemon 159.223.8.217 port 35282 [preauth]
Dec 09 10:42:03 compute-0 sudo[216417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmnnhzhhonhmmvemfyazkwrmsldbgoyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276923.5280154-92-68342129067137/AnsiballZ_systemd_service.py'
Dec 09 10:42:03 compute-0 sudo[216417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:04 compute-0 python3.9[216419]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:42:04 compute-0 systemd[1]: Reloading.
Dec 09 10:42:04 compute-0 systemd-rc-local-generator[216436]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:42:04 compute-0 systemd-sysv-generator[216443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:42:04 compute-0 sudo[216417]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:05 compute-0 sudo[216604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuemxsrnambakbqbgicjqpjamshqzqtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276924.8396423-100-235700093747148/AnsiballZ_command.py'
Dec 09 10:42:05 compute-0 sudo[216604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:05 compute-0 python3.9[216606]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:42:05 compute-0 sudo[216604]: pam_unix(sudo:session): session closed for user root
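[note] The tasks from 10:41:59 through 10:42:05 together retire the old tripleo_ceilometer_agent_ipmi.service: stop and disable it, delete both the packaged and the local unit file, reload systemd, then clear any failed state so it disappears from 'systemctl --failed'. The equivalent by hand:

    sudo systemctl disable --now tripleo_ceilometer_agent_ipmi.service
    sudo rm -f /usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service \
               /etc/systemd/system/tripleo_ceilometer_agent_ipmi.service
    sudo systemctl daemon-reload
    sudo systemctl reset-failed tripleo_ceilometer_agent_ipmi.service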
Dec 09 10:42:05 compute-0 sudo[216757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpybryfwpmmcqsoahxnmwkirmnpuhpld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276925.6405869-109-264598068637561/AnsiballZ_file.py'
Dec 09 10:42:05 compute-0 sudo[216757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:06 compute-0 python3.9[216759]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:42:06 compute-0 sudo[216757]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:07 compute-0 python3.9[216909]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:42:07 compute-0 python3.9[217061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:08 compute-0 python3.9[217182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276927.3273456-125-50241194790973/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:42:09 compute-0 sudo[217332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntexhpoqotckpcswxzvcxqpokbdkgzuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276929.0783489-143-266850233847065/AnsiballZ_getent.py'
Dec 09 10:42:09 compute-0 sudo[217332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:09 compute-0 python3.9[217334]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 09 10:42:09 compute-0 sudo[217332]: pam_unix(sudo:session): session closed for user root
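[note] The getent task resolves the ceilometer account before the power-monitoring config files below are rendered; with fail_key=True the play would abort if the user were missing. Equivalent check:

    getent passwd ceilometer || echo "no ceilometer user"   # getent exits 2 when the key is absent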
Dec 09 10:42:11 compute-0 python3.9[217485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:11 compute-0 podman[217580]: 2025-12-09 10:42:11.522485455 +0000 UTC m=+0.093274354 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 09 10:42:11 compute-0 python3.9[217619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276930.5951848-171-117377740823151/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:12 compute-0 python3.9[217776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:13 compute-0 python3.9[217897]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276931.8564055-171-256552480878676/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:13 compute-0 python3.9[218047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:14 compute-0 python3.9[218168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276933.267991-171-38003233938399/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:15 compute-0 python3.9[218318]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:42:15 compute-0 podman[218444]: 2025-12-09 10:42:15.772424912 +0000 UTC m=+0.071131873 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:42:15 compute-0 python3.9[218475]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:42:16 compute-0 python3.9[218646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:16 compute-0 nova_compute[189493]: 2025-12-09 10:42:16.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:42:16 compute-0 nova_compute[189493]: 2025-12-09 10:42:16.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:42:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:42:16.969 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:42:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:42:16.970 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:42:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:42:16.970 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:42:17 compute-0 python3.9[218767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276936.1018965-230-116022038220383/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
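[note] The mode=420 in these copy/file tasks is not a typo: the journal renders the numeric mode argument as a decimal integer, and 420 decimal is 0644 octal, consistent with the 0640/0600 modes logged elsewhere as strings. Quick conversion:

    python3 -c 'print(oct(420))'   # 0o644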
Dec 09 10:42:17 compute-0 nova_compute[189493]: 2025-12-09 10:42:17.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:42:17 compute-0 nova_compute[189493]: 2025-12-09 10:42:17.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:42:17 compute-0 nova_compute[189493]: 2025-12-09 10:42:17.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:42:17 compute-0 nova_compute[189493]: 2025-12-09 10:42:17.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:42:18 compute-0 python3.9[218917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:18 compute-0 python3.9[218993]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:18 compute-0 nova_compute[189493]: 2025-12-09 10:42:18.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:42:18 compute-0 nova_compute[189493]: 2025-12-09 10:42:18.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:42:18 compute-0 nova_compute[189493]: 2025-12-09 10:42:18.844 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:42:18 compute-0 nova_compute[189493]: 2025-12-09 10:42:18.928 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 10:42:18 compute-0 nova_compute[189493]: 2025-12-09 10:42:18.929 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:42:19 compute-0 python3.9[219143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:19 compute-0 nova_compute[189493]: 2025-12-09 10:42:19.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:42:19 compute-0 nova_compute[189493]: 2025-12-09 10:42:19.870 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:42:19 compute-0 nova_compute[189493]: 2025-12-09 10:42:19.870 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:42:19 compute-0 nova_compute[189493]: 2025-12-09 10:42:19.871 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:42:19 compute-0 nova_compute[189493]: 2025-12-09 10:42:19.871 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:42:19 compute-0 python3.9[219264]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276938.6147618-230-177150612178686/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.082 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.083 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5824MB free_disk=72.23726272583008GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.083 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.083 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.153 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.154 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.180 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.200 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
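Editor's note: the inventory dict reported to Placement above determines the capacity the scheduler can actually hand out, computed per resource class as (total - reserved) * allocation_ratio. A short worked computation using exactly the values from the log line:

```python
# Effective schedulable capacity implied by the reported inventory:
# capacity = (total - reserved) * allocation_ratio per resource class.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# MEMORY_MB: 7167, VCPU: 32, DISK_GB: 71.1
```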
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.201 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:42:20 compute-0 nova_compute[189493]: 2025-12-09 10:42:20.202 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:42:20 compute-0 python3.9[219414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:21 compute-0 nova_compute[189493]: 2025-12-09 10:42:21.202 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:42:21 compute-0 python3.9[219535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276940.066474-230-3015603195563/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:22 compute-0 python3.9[219685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:22 compute-0 python3.9[219806]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276941.4666798-230-102056999529110/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.285 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.286 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
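Editor's note: the two lines above say this polling source has more pollsters than worker threads and is being processed with a single thread, so the pollsters effectively run one after another. A minimal sketch of that executor pattern, with illustrative pollster names:

```python
# Sketch of the pattern behind "Processing pollsters ... with [1] threads":
# pollster callables are submitted to a small ThreadPoolExecutor, so with
# one worker they run back to back rather than in parallel.
from concurrent.futures import ThreadPoolExecutor


def poll(meter_name):
    # Stand-in for a single pollster's sample-collection work.
    return f"{meter_name}: polled"


pollsters = ["network.incoming.bytes", "disk.device.capacity", "cpu"]

with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(poll, name) for name in pollsters]
    for future in futures:
        print(future.result())
```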
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:42:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
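Editor's note: throughout this polling cycle every pollster is skipped because the shared `local_instances` discovery returned an empty list (the discovery cache shown above is `{'local_instances': []}`, i.e. no guest VMs exist on this compute node yet). A minimal sketch of that discovery-then-skip flow, with the data structures simplified:

```python
# Sketch of the discovery-then-skip behaviour in this cycle: the cached
# 'local_instances' discovery result is empty, so each pollster logs
# "Skip pollster <name>, no resources found this cycle".
discovery_cache = {"local_instances": []}  # no instances on this node yet


def run_pollster(name, discovery_method="local_instances"):
    resources = discovery_cache.get(discovery_method, [])
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return []
    return [f"{name} sample for {r}" for r in resources]


for meter in ("cpu", "memory.usage", "power.state"):
    run_pollster(meter)
```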
Dec 09 10:42:23 compute-0 python3.9[219956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:23 compute-0 podman[220052]: 2025-12-09 10:42:23.87301562 +0000 UTC m=+0.086820389 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec 09 10:42:24 compute-0 python3.9[220088]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276942.86059-230-87382650480819/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:24 compute-0 podman[220096]: 2025-12-09 10:42:24.134515822 +0000 UTC m=+0.050849742 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
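Editor's note: the two podman `health_status` entries above are periodic container health checks driven by the configured `healthcheck` test (`/openstack/healthcheck`). One way to re-run such a check by hand and read the result, using the container name from the log (`podman healthcheck run` exits 0 when the check passes):

```python
# Re-run a container health check manually; container name taken from the
# ceilometer_agent_compute entry above.
import subprocess

name = "ceilometer_agent_compute"
result = subprocess.run(["podman", "healthcheck", "run", name])
print("healthy" if result.returncode == 0 else "unhealthy")
```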
Dec 09 10:42:24 compute-0 python3.9[220263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:25 compute-0 python3.9[220339]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:25 compute-0 sudo[220489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evnwdfofvnvcggsvbuqauyayevxjvsua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276945.5628483-325-12027119370428/AnsiballZ_file.py'
Dec 09 10:42:25 compute-0 sudo[220489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:26 compute-0 python3.9[220491]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:26 compute-0 sudo[220489]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:26 compute-0 sudo[220641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inasoqqexrjhtpreaxwajmgllkhhoqet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276946.3233893-333-142052093171745/AnsiballZ_file.py'
Dec 09 10:42:26 compute-0 sudo[220641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:26 compute-0 python3.9[220643]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:26 compute-0 sudo[220641]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:27 compute-0 sudo[220793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezgyaqyghvslwbkzuoidxjaymhggobnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276947.1045527-341-80405757697561/AnsiballZ_file.py'
Dec 09 10:42:27 compute-0 sudo[220793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:27 compute-0 python3.9[220795]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:42:27 compute-0 sudo[220793]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:28 compute-0 sudo[220945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wemyetqvwegfwvmyklfxngxdpezzkphz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276947.892729-349-1279621786950/AnsiballZ_stat.py'
Dec 09 10:42:28 compute-0 sudo[220945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:28 compute-0 python3.9[220947]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:28 compute-0 sudo[220945]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:29 compute-0 sudo[221068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwxugyqkieampczdovqlqdqmtjttndun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276947.892729-349-1279621786950/AnsiballZ_copy.py'
Dec 09 10:42:29 compute-0 sudo[221068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:29 compute-0 python3.9[221070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276947.892729-349-1279621786950/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:42:29 compute-0 sudo[221068]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:29 compute-0 sudo[221144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgynmdqqdzfywyqwvaqudbbclxxsuutf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276947.892729-349-1279621786950/AnsiballZ_stat.py'
Dec 09 10:42:29 compute-0 sudo[221144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:29 compute-0 podman[203687]: time="2025-12-09T10:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:42:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22542 "" "Go-http-client/1.1"
Dec 09 10:42:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3420 "" "Go-http-client/1.1"
Dec 09 10:42:29 compute-0 python3.9[221146]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:29 compute-0 sudo[221144]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:30 compute-0 sudo[221280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsxpxablnsfznmqvyibjdggaenpkhkhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276947.892729-349-1279621786950/AnsiballZ_copy.py'
Dec 09 10:42:30 compute-0 sudo[221280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:30 compute-0 podman[221241]: 2025-12-09 10:42:30.192603138 +0000 UTC m=+0.061281045 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 09 10:42:30 compute-0 python3.9[221290]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276947.892729-349-1279621786950/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:42:30 compute-0 sudo[221280]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:30 compute-0 sudo[221440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zylgvvpswwdzknufyepdfdefxyoymevg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276950.603975-349-192118610981644/AnsiballZ_stat.py'
Dec 09 10:42:30 compute-0 sudo[221440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:31 compute-0 python3.9[221442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:42:31 compute-0 sudo[221440]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:31 compute-0 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:42:31 compute-0 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:42:31 compute-0 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:42:31 compute-0 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:42:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:42:31 compute-0 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:42:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:42:31 compute-0 sudo[221579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbrmzvtladhhapuqfyfxciviyhignvyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276950.603975-349-192118610981644/AnsiballZ_copy.py'
Dec 09 10:42:31 compute-0 sudo[221579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:31 compute-0 podman[221537]: 2025-12-09 10:42:31.51472394 +0000 UTC m=+0.095035201 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 10:42:31 compute-0 python3.9[221586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276950.603975-349-192118610981644/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 09 10:42:31 compute-0 sudo[221579]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:32 compute-0 sudo[221742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azhpqmjyyxgzmnvyjljofezudhrjbsim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276951.9875584-391-155318102717229/AnsiballZ_container_config_data.py'
Dec 09 10:42:32 compute-0 sudo[221742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:32 compute-0 python3.9[221744]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec 09 10:42:32 compute-0 sudo[221742]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:33 compute-0 podman[221870]: 2025-12-09 10:42:33.587675131 +0000 UTC m=+0.050415710 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:42:33 compute-0 sudo[221913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phcccdneeuvkrxfezwbhpkzxohhgkitt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276953.094212-400-238258634449092/AnsiballZ_container_config_hash.py'
Dec 09 10:42:33 compute-0 sudo[221913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:33 compute-0 sshd-session[221813]: Connection closed by authenticating user daemon 159.223.8.217 port 57276 [preauth]
Dec 09 10:42:33 compute-0 python3.9[221922]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:42:33 compute-0 sudo[221913]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:34 compute-0 sudo[222072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfiopkusatuohrxdsvpkjlvkcrekslzw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276954.0859332-410-130040326906215/AnsiballZ_edpm_container_manage.py'
Dec 09 10:42:34 compute-0 sudo[222072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:34 compute-0 python3[222074]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:42:35 compute-0 podman[222110]: 2025-12-09 10:42:35.100998215 +0000 UTC m=+0.055504967 container create ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.license=GPLv2)
Dec 09 10:42:35 compute-0 podman[222110]: 2025-12-09 10:42:35.076158342 +0000 UTC m=+0.030665124 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec 09 10:42:35 compute-0 python3[222074]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Dec 09 10:42:35 compute-0 sudo[222072]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:35 compute-0 sudo[222299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eahfnvvrwxxuwwozfgtxlviyjtbzobco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276955.458443-418-265971074411457/AnsiballZ_stat.py'
Dec 09 10:42:35 compute-0 sudo[222299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:35 compute-0 python3.9[222301]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:42:36 compute-0 sudo[222299]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:36 compute-0 sudo[222453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhbmajozqyhttcyaccgqpmmpdxugugwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276956.4136581-427-96597702896426/AnsiballZ_file.py'
Dec 09 10:42:36 compute-0 sudo[222453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:37 compute-0 python3.9[222455]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:37 compute-0 sudo[222453]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:37 compute-0 sudo[222604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhjhypilipfyvctazwgdkicssdazkpnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276957.1582315-427-230451822558060/AnsiballZ_copy.py'
Dec 09 10:42:37 compute-0 sudo[222604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:37 compute-0 python3.9[222606]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276957.1582315-427-230451822558060/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:37 compute-0 sudo[222604]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:38 compute-0 sudo[222680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-occcevlmkxfsetvzjadmttnhofliizcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276957.1582315-427-230451822558060/AnsiballZ_systemd.py'
Dec 09 10:42:38 compute-0 sudo[222680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:38 compute-0 python3.9[222682]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:42:38 compute-0 systemd[1]: Reloading.
Dec 09 10:42:38 compute-0 systemd-sysv-generator[222706]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:42:38 compute-0 systemd-rc-local-generator[222703]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:42:39 compute-0 sudo[222680]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:39 compute-0 sudo[222791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxbuxrndufxogrxigqznqpoiafunqnaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276957.1582315-427-230451822558060/AnsiballZ_systemd.py'
Dec 09 10:42:39 compute-0 sudo[222791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:39 compute-0 python3.9[222793]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:42:39 compute-0 systemd[1]: Reloading.
Dec 09 10:42:39 compute-0 systemd-sysv-generator[222821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:42:39 compute-0 systemd-rc-local-generator[222818]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:42:40 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 09 10:42:40 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 09 10:42:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 09 10:42:40 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.
Dec 09 10:42:40 compute-0 podman[222833]: 2025-12-09 10:42:40.250740287 +0000 UTC m=+0.144380352 container init ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: + sudo -E kolla_set_configs
Dec 09 10:42:40 compute-0 sudo[222854]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 09 10:42:40 compute-0 sudo[222854]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:42:40 compute-0 sudo[222854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:42:40 compute-0 podman[222833]: 2025-12-09 10:42:40.278969914 +0000 UTC m=+0.172609949 container start ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 09 10:42:40 compute-0 podman[222833]: ceilometer_agent_ipmi
Dec 09 10:42:40 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 09 10:42:40 compute-0 sudo[222791]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Validating config file
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying service configuration files
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: INFO:__main__:Writing out command to execute
Dec 09 10:42:40 compute-0 sudo[222854]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: ++ cat /run_command
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: + ARGS=
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: + sudo kolla_copy_cacerts
Dec 09 10:42:40 compute-0 podman[222855]: 2025-12-09 10:42:40.34697929 +0000 UTC m=+0.057018429 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 09 10:42:40 compute-0 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-d49b0d9dbbb7496.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:42:40 compute-0 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-d49b0d9dbbb7496.service: Failed with result 'exit-code'.
Dec 09 10:42:40 compute-0 sudo[222878]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 09 10:42:40 compute-0 sudo[222878]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:42:40 compute-0 sudo[222878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:42:40 compute-0 sudo[222878]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: + [[ ! -n '' ]]
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: + . kolla_extend_start
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: + umask 0022
Dec 09 10:42:40 compute-0 ceilometer_agent_ipmi[222848]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec 09 10:42:41 compute-0 sudo[223030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxsbcdcufxzzktcnblicogcsqoxrjbli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276960.660237-453-176565452296520/AnsiballZ_container_config_data.py'
Dec 09 10:42:41 compute-0 sudo[223030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.306 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.308 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.309 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 09 10:42:41 compute-0 python3.9[223032]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec 09 10:42:41 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.429 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp1g99b78j/privsep.sock']
Dec 09 10:42:41 compute-0 sudo[223030]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:41 compute-0 sudo[223037]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp1g99b78j/privsep.sock
Dec 09 10:42:41 compute-0 sudo[223037]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:42:41 compute-0 sudo[223037]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:42:41 compute-0 podman[223140]: 2025-12-09 10:42:41.918448293 +0000 UTC m=+0.075568373 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 09 10:42:41 compute-0 sudo[223211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozxovqtbtmohvmtucrkexiwqsxvrldzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276961.6876254-462-188079967868225/AnsiballZ_container_config_hash.py'
Dec 09 10:42:41 compute-0 sudo[223211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:42 compute-0 sudo[223037]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.110 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.111 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1g99b78j/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.998 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.006 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.010 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.010 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec 09 10:42:42 compute-0 python3.9[223213]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 09 10:42:42 compute-0 sudo[223211]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.240 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.241 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.243 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.243 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.243 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.243 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.250 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.250 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.250 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
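The block ending above is oslo.config's standard startup dump: with debug logging enabled, cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values(), which prints every registered option as "group.option = value" and terminates with the row of asterisks on the last line; options registered with secret=True (the telemetry secret, the messaging/transport URLs, the rgw keys) are masked as ****. A minimal sketch of the same mechanism using only stock oslo.config:

    from oslo_config import cfg
    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('cotyledon.oslo_config_glue')

    conf = cfg.ConfigOpts()
    conf.register_opts(
        [cfg.IntOpt('batch_size', default=50),
         cfg.StrOpt('cfg_file', default='polling.yaml')],
        group='polling')
    # secret=True is what renders a value as "****" in the dump
    conf.register_opt(cfg.StrOpt('telemetry_secret', secret=True),
                      group='publisher')

    conf(args=[], project='ceilometer')
    # Emits "polling.batch_size = 50" style lines like the ones above,
    # then a row of asterisks as the terminator.
    conf.log_opt_values(LOG, logging.DEBUG)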
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 09 10:42:42 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.269 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
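The dict logged by load_config is the parsed content of the polling.yaml named in polling.cfg_file above: a single source that polls the hardware.* meters every 120 seconds, which is all this IPMI agent handles. Round-tripping the logged structure back to YAML gives a plausible reconstruction of the file (formatting assumed):

    import yaml  # PyYAML, assumed available

    # The exact structure from the "Config file:" line above.
    polling = {'sources': [{'name': 'pollsters',
                            'interval': 120,
                            'meters': ['hardware.*']}]}

    # Prints the YAML form that polling.yaml would contain.
    print(yaml.safe_dump(polling, default_flow_style=False, sort_keys=False))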
Dec 09 10:42:42 compute-0 sudo[223368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiskxcxxdjmgoyrgqudfnwnixmabcpdb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765276962.5256784-472-176836127831497/AnsiballZ_edpm_container_manage.py'
Dec 09 10:42:42 compute-0 sudo[223368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:43 compute-0 python3[223370]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec 09 10:42:43 compute-0 podman[223403]: 2025-12-09 10:42:43.565317939 +0000 UTC m=+0.049969057 container create 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 09 10:42:43 compute-0 podman[223403]: 2025-12-09 10:42:43.537515775 +0000 UTC m=+0.022166913 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec 09 10:42:43 compute-0 python3[223370]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
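The PODMAN-CONTAINER-DEBUG line records how edpm_container_manage expands the definition it matched via config_patterns=kepler.json into podman create flags: each environment entry becomes --env, healthcheck.test becomes --healthcheck-command, net: host becomes --network host, ports become --publish, volumes become --volume, and the whole definition is also attached as the config_data label. A hypothetical sketch of that mapping for the keys visible above (not the module's actual code):

    # Hypothetical sketch of the config_data -> CLI expansion; the real
    # logic lives in the ansible-edpm_container_manage module.
    def podman_create_args(name, cfg):
        args = ['podman', 'create', '--name', name]
        for key, val in cfg.get('environment', {}).items():
            args += ['--env', f'{key}={val}']        # --env ENABLE_GPU=true ...
        if 'healthcheck' in cfg:
            args += ['--healthcheck-command', cfg['healthcheck']['test']]
        if cfg.get('net') == 'host':
            args += ['--network', 'host']
        for port in cfg.get('ports', []):
            args += ['--publish', port]              # --publish 8888:8888
        for vol in cfg.get('volumes', []):
            args += ['--volume', vol]
        args.append(cfg['image'])
        if 'command' in cfg:
            args.append(cfg['command'])              # trailing "-v=2"
        return args

    print(' '.join(podman_create_args('kepler', {
        'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12',
        'net': 'host', 'ports': ['8888:8888'], 'command': '-v=2',
        'environment': {'ENABLE_GPU': 'true'}})))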
Dec 09 10:42:43 compute-0 sudo[223368]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:44 compute-0 sudo[223590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkauokcopkatibsbmovunyxyofepzixj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276963.8843076-480-165615352422855/AnsiballZ_stat.py'
Dec 09 10:42:44 compute-0 sudo[223590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:44 compute-0 python3.9[223592]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:42:44 compute-0 sudo[223590]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:44 compute-0 sudo[223744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hggwfopxmawoagpxdxvzcblwrvhawpsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276964.6465445-489-251059465705306/AnsiballZ_file.py'
Dec 09 10:42:44 compute-0 sudo[223744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:45 compute-0 python3.9[223746]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:45 compute-0 sudo[223744]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:45 compute-0 sudo[223906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvmbgzhpzaxqoihjbstbhhdwkzimtsil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276965.1988633-489-12468935243587/AnsiballZ_copy.py'
Dec 09 10:42:45 compute-0 sudo[223906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:45 compute-0 podman[223869]: 2025-12-09 10:42:45.952436565 +0000 UTC m=+0.105314879 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:42:46 compute-0 python3.9[223912]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276965.1988633-489-12468935243587/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:46 compute-0 sudo[223906]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:46 compute-0 sudo[223993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbzljzxwqpbcbubqynaypexwchjfdpji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276965.1988633-489-12468935243587/AnsiballZ_systemd.py'
Dec 09 10:42:46 compute-0 sudo[223993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:46 compute-0 python3.9[223995]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 09 10:42:46 compute-0 systemd[1]: Reloading.
Dec 09 10:42:46 compute-0 systemd-rc-local-generator[224021]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:42:46 compute-0 systemd-sysv-generator[224024]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:42:47 compute-0 sudo[223993]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:47 compute-0 sudo[224104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nomlmuzorpsjupnmszeyzltfdpezkfeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276965.1988633-489-12468935243587/AnsiballZ_systemd.py'
Dec 09 10:42:47 compute-0 sudo[224104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:47 compute-0 python3.9[224106]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 09 10:42:47 compute-0 systemd[1]: Reloading.
Dec 09 10:42:47 compute-0 systemd-sysv-generator[224134]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 09 10:42:47 compute-0 systemd-rc-local-generator[224131]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 09 10:42:48 compute-0 systemd[1]: Starting kepler container...
Dec 09 10:42:48 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:42:48 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.
Dec 09 10:42:48 compute-0 podman[224145]: 2025-12-09 10:42:48.401895412 +0000 UTC m=+0.132637469 container init 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, config_id=edpm)
Dec 09 10:42:48 compute-0 kepler[224161]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 09 10:42:48 compute-0 podman[224145]: 2025-12-09 10:42:48.424236298 +0000 UTC m=+0.154978305 container start 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, vendor=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec 09 10:42:48 compute-0 kepler[224161]: I1209 10:42:48.426637       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 09 10:42:48 compute-0 kepler[224161]: I1209 10:42:48.426789       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 09 10:42:48 compute-0 kepler[224161]: I1209 10:42:48.426820       1 config.go:295] kernel version: 5.14
Dec 09 10:42:48 compute-0 kepler[224161]: I1209 10:42:48.427335       1 power.go:78] Unable to obtain power, use estimate method
Dec 09 10:42:48 compute-0 podman[224145]: kepler
Dec 09 10:42:48 compute-0 kepler[224161]: I1209 10:42:48.427344       1 redfish.go:169] failed to get redfish credential file path
Dec 09 10:42:48 compute-0 kepler[224161]: I1209 10:42:48.427754       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 09 10:42:48 compute-0 kepler[224161]: I1209 10:42:48.427824       1 power.go:79] using none to obtain power
Dec 09 10:42:48 compute-0 kepler[224161]: E1209 10:42:48.427838       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 09 10:42:48 compute-0 kepler[224161]: E1209 10:42:48.427858       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 09 10:42:48 compute-0 kepler[224161]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 09 10:42:48 compute-0 kepler[224161]: I1209 10:42:48.429291       1 exporter.go:84] Number of CPUs: 8
Dec 09 10:42:48 compute-0 systemd[1]: Started kepler container.
Dec 09 10:42:48 compute-0 sudo[224104]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:48 compute-0 podman[224171]: 2025-12-09 10:42:48.533552494 +0000 UTC m=+0.092234272 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-type=git, version=9.4, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 09 10:42:48 compute-0 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-7ead5e0f1bade9d9.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:42:48 compute-0 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-7ead5e0f1bade9d9.service: Failed with result 'exit-code'.
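The transient 8ad198c1...-7ead5e0f1bade9d9.service unit that just failed is podman's healthcheck runner for the new kepler container: it executes the configured /openstack/healthcheck kepler test, which exits non-zero until the exporter is actually listening, matching the health_status=starting, health_failing_streak=1 event above. The same check can be run by hand, assuming the podman CLI on PATH:

    import subprocess

    # Run the container's configured healthcheck once, exactly as the
    # transient systemd unit above does.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'kepler'])
    print('healthy' if result.returncode == 0 else 'unhealthy')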
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.008517       1 watcher.go:83] Using in cluster k8s config
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.008592       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 09 10:42:49 compute-0 kepler[224161]: E1209 10:42:49.008720       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.016147       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.016212       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.023471       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.023503       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 09 10:42:49 compute-0 sudo[224346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekzzymtmjfvexdlzoemyhompufqltgoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276968.6682322-513-249542135614220/AnsiballZ_systemd.py'
Dec 09 10:42:49 compute-0 sudo[224346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.035160       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.035221       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.035250       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.048271       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.048323       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.048332       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.048341       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.048350       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.048367       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.049176       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.049253       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.049332       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.049430       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.049675       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 09 10:42:49 compute-0 kepler[224161]: I1209 10:42:49.050953       1 exporter.go:208] Started Kepler in 624.492393ms
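Startup is now complete: running inside a Nova VM, Kepler finds no RAPL, ACPI power meter, or Redfish credentials, so it falls back to the estimate path (the Ratio and Regressor/AbsPower models logged above) and serves Prometheus metrics on host port 8888. A quick smoke test from the node, assuming the conventional /metrics path:

    import urllib.request

    # Fetch the exporter's metrics page; /metrics is the standard
    # Prometheus endpoint and is an assumption here.
    with urllib.request.urlopen('http://localhost:8888/metrics', timeout=5) as resp:
        body = resp.read().decode()

    # Show only the kepler_ series to keep the output short.
    for line in body.splitlines():
        if line.startswith('kepler_'):
            print(line)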
Dec 09 10:42:49 compute-0 python3.9[224356]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:42:49 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 09 10:42:49 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.496 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 09 10:42:49 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.598 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 09 10:42:49 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.599 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 09 10:42:49 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.599 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 09 10:42:49 compute-0 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.613 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
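This is cotyledon's default graceful shutdown: the master process catches systemd's SIGTERM, forwards it to its workers (here AgentManager(0) [12]), waits for them to exit, then logs "Shutdown finish". A minimal service wired the same way, with oslo_config_glue supplying the option dump seen earlier in this service's log (names here are illustrative, not ceilometer's actual classes):

    import time

    import cotyledon
    from cotyledon import oslo_config_glue
    from oslo_config import cfg


    class AgentManager(cotyledon.Service):
        name = 'AgentManager'

        def run(self):
            # Poll forever; cotyledon delivers SIGTERM and calls terminate().
            while True:
                time.sleep(120)

        def terminate(self):
            # Graceful exit path taken on SIGTERM.
            pass


    conf = cfg.ConfigOpts()
    conf(args=[], project='example')

    sm = cotyledon.ServiceManager()
    oslo_config_glue.setup(sm, conf)   # logs opt values at startup, as above
    sm.add(AgentManager, workers=1)
    sm.run()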
Dec 09 10:42:49 compute-0 systemd[1]: libpod-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope: Deactivated successfully.
Dec 09 10:42:49 compute-0 systemd[1]: libpod-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope: Consumed 2.296s CPU time.
Dec 09 10:42:49 compute-0 podman[224361]: 2025-12-09 10:42:49.794344352 +0000 UTC m=+0.369317211 container died ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 10:42:49 compute-0 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-d49b0d9dbbb7496.timer: Deactivated successfully.
Dec 09 10:42:49 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.
Dec 09 10:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-userdata-shm.mount: Deactivated successfully.
Dec 09 10:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774-merged.mount: Deactivated successfully.
Dec 09 10:42:49 compute-0 podman[224361]: 2025-12-09 10:42:49.88900168 +0000 UTC m=+0.463974549 container cleanup ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec 09 10:42:49 compute-0 podman[224361]: ceilometer_agent_ipmi
Dec 09 10:42:50 compute-0 podman[224386]: ceilometer_agent_ipmi
Dec 09 10:42:50 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 09 10:42:50 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 09 10:42:50 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 09 10:42:50 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 09 10:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec 09 10:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 09 10:42:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 09 10:42:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.
Dec 09 10:42:50 compute-0 podman[224396]: 2025-12-09 10:42:50.232668994 +0000 UTC m=+0.183751286 container init ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: + sudo -E kolla_set_configs
Dec 09 10:42:50 compute-0 sudo[224418]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 09 10:42:50 compute-0 sudo[224418]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:42:50 compute-0 sudo[224418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:42:50 compute-0 podman[224396]: 2025-12-09 10:42:50.268694862 +0000 UTC m=+0.219777114 container start ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:42:50 compute-0 podman[224396]: ceilometer_agent_ipmi
Dec 09 10:42:50 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Dec 09 10:42:50 compute-0 sudo[224346]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Validating config file
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying service configuration files
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: INFO:__main__:Writing out command to execute
Dec 09 10:42:50 compute-0 sudo[224418]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:50 compute-0 podman[224419]: 2025-12-09 10:42:50.342542975 +0000 UTC m=+0.062987429 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: ++ cat /run_command
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: + ARGS=
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: + sudo kolla_copy_cacerts
Dec 09 10:42:50 compute-0 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-31abf5bbaf1ad868.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:42:50 compute-0 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-31abf5bbaf1ad868.service: Failed with result 'exit-code'.
Dec 09 10:42:50 compute-0 sudo[224438]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 09 10:42:50 compute-0 sudo[224438]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:42:50 compute-0 sudo[224438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:42:50 compute-0 sudo[224438]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: + [[ ! -n '' ]]
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: + . kolla_extend_start
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: + umask 0022
Dec 09 10:42:50 compute-0 ceilometer_agent_ipmi[224412]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec 09 10:42:51 compute-0 sudo[224590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpnuoajvhcdqdcpsmvxhchhucruohivg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276970.576901-521-105995391014600/AnsiballZ_systemd.py'
Dec 09 10:42:51 compute-0 sudo[224590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.216 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.216 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.252 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.254 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.255 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.283 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp_izkm7uc/privsep.sock']
Dec 09 10:42:51 compute-0 sudo[224597]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmp_izkm7uc/privsep.sock
Dec 09 10:42:51 compute-0 sudo[224597]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 09 10:42:51 compute-0 sudo[224597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 09 10:42:51 compute-0 python3.9[224592]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:42:51 compute-0 systemd[1]: Stopping kepler container...
Dec 09 10:42:51 compute-0 kepler[224161]: I1209 10:42:51.533305       1 exporter.go:218] Received shutdown signal
Dec 09 10:42:51 compute-0 kepler[224161]: I1209 10:42:51.534007       1 exporter.go:226] Exiting...
Dec 09 10:42:51 compute-0 systemd[1]: libpod-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope: Deactivated successfully.
Dec 09 10:42:51 compute-0 podman[224603]: 2025-12-09 10:42:51.759007676 +0000 UTC m=+0.283410980 container died 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container)
Dec 09 10:42:51 compute-0 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-7ead5e0f1bade9d9.timer: Deactivated successfully.
Dec 09 10:42:51 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.
Dec 09 10:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-userdata-shm.mount: Deactivated successfully.
Dec 09 10:42:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4981261eff032724feeb37b979fc07f98c64089f68922d5ec592f23cf06ee21b-merged.mount: Deactivated successfully.
Dec 09 10:42:51 compute-0 podman[224603]: 2025-12-09 10:42:51.809078685 +0000 UTC m=+0.333481969 container cleanup 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9, version=9.4, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm)
Dec 09 10:42:51 compute-0 podman[224603]: kepler
Dec 09 10:42:51 compute-0 podman[224630]: kepler
Dec 09 10:42:51 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec 09 10:42:51 compute-0 systemd[1]: Stopped kepler container.
Dec 09 10:42:51 compute-0 systemd[1]: Starting kepler container...
Dec 09 10:42:51 compute-0 sudo[224597]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.960 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.961 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_izkm7uc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.855 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.860 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.862 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 09 10:42:51 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.862 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec 09 10:42:52 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:42:52 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.
Dec 09 10:42:52 compute-0 podman[224644]: 2025-12-09 10:42:52.055555462 +0000 UTC m=+0.126454872 container init 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, vcs-type=git, io.openshift.expose-services=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.063 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.063 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.064 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.064 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.064 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.064 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 kepler[224661]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.083990       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.084118       1 config.go:293] using gCgroup ID in the BPF program: true
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.084136       1 config.go:295] kernel version: 5.14
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.086029       1 power.go:78] Unable to obtain power, use estimate method
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.086055       1 redfish.go:169] failed to get redfish credential file path
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.086450       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.086464       1 power.go:79] using none to obtain power
Dec 09 10:42:52 compute-0 kepler[224661]: E1209 10:42:52.086479       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 kepler[224661]: E1209 10:42:52.086502       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.087 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.087 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.087 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 09 10:42:52 compute-0 kepler[224661]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.088597       1 exporter.go:84] Number of CPUs: 8
Dec 09 10:42:52 compute-0 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.090 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 09 10:42:52 compute-0 podman[224644]: 2025-12-09 10:42:52.092902615 +0000 UTC m=+0.163802025 container start 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, config_id=edpm, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Dec 09 10:42:52 compute-0 podman[224644]: kepler
Dec 09 10:42:52 compute-0 systemd[1]: Started kepler container.
Dec 09 10:42:52 compute-0 sudo[224590]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:52 compute-0 podman[224672]: 2025-12-09 10:42:52.174732815 +0000 UTC m=+0.072096757 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, version=9.4, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=)
Dec 09 10:42:52 compute-0 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-66537a7c068fd481.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:42:52 compute-0 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-66537a7c068fd481.service: Failed with result 'exit-code'.
Dec 09 10:42:52 compute-0 sudo[224848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyklilovjibsvjeqdolpvnpnmqoiomrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276972.294242-529-143269299114628/AnsiballZ_find.py'
Dec 09 10:42:52 compute-0 sudo[224848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.657016       1 watcher.go:83] Using in cluster k8s config
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.657106       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec 09 10:42:52 compute-0 kepler[224661]: E1209 10:42:52.657215       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.664698       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.664721       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.672154       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.672208       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.687499       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.687567       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.687590       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698078       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698138       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698148       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698157       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698167       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698184       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698301       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698353       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698392       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698427       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.698631       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec 09 10:42:52 compute-0 kepler[224661]: I1209 10:42:52.699137       1 exporter.go:208] Started Kepler in 615.421496ms
Dec 09 10:42:52 compute-0 python3.9[224850]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 09 10:42:52 compute-0 sudo[224848]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:53 compute-0 sudo[225010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcztgyednnkirmanfgdqiwmdvcytsixc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276973.2752438-539-88025421614729/AnsiballZ_podman_container_info.py'
Dec 09 10:42:53 compute-0 sudo[225010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:54 compute-0 podman[225012]: 2025-12-09 10:42:54.037484354 +0000 UTC m=+0.101648768 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible)
Dec 09 10:42:54 compute-0 python3.9[225013]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 09 10:42:54 compute-0 sudo[225010]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:54 compute-0 podman[225126]: 2025-12-09 10:42:54.986559854 +0000 UTC m=+0.122272269 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 09 10:42:55 compute-0 sudo[225212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrkhndhyjiuqcfiooeohrdgrvlfjeihu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276974.5276911-547-40281599312350/AnsiballZ_podman_container_exec.py'
Dec 09 10:42:55 compute-0 sudo[225212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:55 compute-0 python3.9[225214]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:42:55 compute-0 systemd[1]: Started libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope.
Dec 09 10:42:55 compute-0 podman[225215]: 2025-12-09 10:42:55.494399552 +0000 UTC m=+0.144671495 container exec e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 09 10:42:55 compute-0 podman[225215]: 2025-12-09 10:42:55.509725698 +0000 UTC m=+0.159997661 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:42:55 compute-0 sudo[225212]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:55 compute-0 systemd[1]: libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope: Deactivated successfully.
Dec 09 10:42:56 compute-0 sudo[225396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqqcyuzzbybqvjiycngazppufklstqwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276975.8040144-555-14588629113352/AnsiballZ_podman_container_exec.py'
Dec 09 10:42:56 compute-0 sudo[225396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:56 compute-0 python3.9[225398]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:42:56 compute-0 systemd[1]: Started libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope.
Dec 09 10:42:56 compute-0 podman[225399]: 2025-12-09 10:42:56.510245854 +0000 UTC m=+0.131823378 container exec e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 10:42:56 compute-0 podman[225399]: 2025-12-09 10:42:56.55467704 +0000 UTC m=+0.176254574 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 09 10:42:56 compute-0 systemd[1]: libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope: Deactivated successfully.
Dec 09 10:42:56 compute-0 sudo[225396]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:57 compute-0 sudo[225579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfzcavoxwnxydfjkykrtihbryemmsefb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276976.9253895-563-119338747112045/AnsiballZ_file.py'
Dec 09 10:42:57 compute-0 sudo[225579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:57 compute-0 python3.9[225581]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:42:57 compute-0 sudo[225579]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:58 compute-0 sudo[225731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpgzushxnzuxkxkjlxgbnzhlwrfgzaeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276977.8546178-572-92852329281425/AnsiballZ_podman_container_info.py'
Dec 09 10:42:58 compute-0 sudo[225731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:58 compute-0 python3.9[225733]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 09 10:42:58 compute-0 sudo[225731]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:59 compute-0 sudo[225895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyxhnlyjkifsnevulnggbuloafwxyrgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276978.8372915-580-92716848920391/AnsiballZ_podman_container_exec.py'
Dec 09 10:42:59 compute-0 sudo[225895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:42:59 compute-0 python3.9[225897]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:42:59 compute-0 systemd[1]: Started libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope.
Dec 09 10:42:59 compute-0 podman[225898]: 2025-12-09 10:42:59.666228839 +0000 UTC m=+0.123103671 container exec 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:42:59 compute-0 podman[225898]: 2025-12-09 10:42:59.698923897 +0000 UTC m=+0.155798759 container exec_died 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:42:59 compute-0 systemd[1]: libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope: Deactivated successfully.
Dec 09 10:42:59 compute-0 podman[203687]: time="2025-12-09T10:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:42:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec 09 10:42:59 compute-0 sudo[225895]: pam_unix(sudo:session): session closed for user root
Dec 09 10:42:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4261 "" "Go-http-client/1.1"
Dec 09 10:43:00 compute-0 sudo[226094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myxzfsyppptqalfadsawsupupbulbghx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276979.9815073-588-125455095590466/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:00 compute-0 podman[226050]: 2025-12-09 10:43:00.456728237 +0000 UTC m=+0.094756882 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 09 10:43:00 compute-0 sudo[226094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:00 compute-0 python3.9[226098]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:00 compute-0 systemd[1]: Started libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope.
Dec 09 10:43:00 compute-0 podman[226099]: 2025-12-09 10:43:00.816152669 +0000 UTC m=+0.132443834 container exec 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 09 10:43:00 compute-0 podman[226099]: 2025-12-09 10:43:00.850040818 +0000 UTC m=+0.166331893 container exec_died 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:43:00 compute-0 systemd[1]: libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope: Deactivated successfully.
Dec 09 10:43:00 compute-0 sudo[226094]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:01 compute-0 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:43:01 compute-0 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:43:01 compute-0 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:43:01 compute-0 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:43:01 compute-0 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:43:01 compute-0 sudo[226276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owrmllxuqmqysznxlbmtsgbarytgjzmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276981.1294837-596-93657664116995/AnsiballZ_file.py'
Dec 09 10:43:01 compute-0 sudo[226276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:01 compute-0 python3.9[226279]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:01 compute-0 podman[226278]: 2025-12-09 10:43:01.860406321 +0000 UTC m=+0.207673255 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:43:01 compute-0 sudo[226276]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:02 compute-0 sudo[226453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cihhszsdxyidnjbzkojwjfaanyhscdll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276982.1809306-605-103023098784632/AnsiballZ_podman_container_info.py'
Dec 09 10:43:02 compute-0 sudo[226453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:02 compute-0 python3.9[226455]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 09 10:43:03 compute-0 sudo[226453]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:03 compute-0 sudo[226633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfocouykqglqcqsylllwscxkufvmsyjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276983.2809505-613-16923691252381/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:03 compute-0 podman[226593]: 2025-12-09 10:43:03.799476571 +0000 UTC m=+0.111367733 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:43:03 compute-0 sudo[226633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:03 compute-0 sshd-session[226567]: Connection closed by authenticating user daemon 159.223.8.217 port 40258 [preauth]
Dec 09 10:43:04 compute-0 python3.9[226644]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:04 compute-0 systemd[1]: Started libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope.
Dec 09 10:43:04 compute-0 podman[226645]: 2025-12-09 10:43:04.177865177 +0000 UTC m=+0.130339558 container exec 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 10:43:04 compute-0 podman[226645]: 2025-12-09 10:43:04.211942031 +0000 UTC m=+0.164416382 container exec_died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:43:04 compute-0 sudo[226633]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:04 compute-0 systemd[1]: libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec 09 10:43:05 compute-0 sudo[226823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czyltbzviuweywcwfomhztctooryhzvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276984.5244796-621-221791519925865/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:05 compute-0 sudo[226823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:05 compute-0 python3.9[226825]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:05 compute-0 systemd[1]: Started libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope.
Dec 09 10:43:05 compute-0 podman[226826]: 2025-12-09 10:43:05.414703853 +0000 UTC m=+0.194621830 container exec 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 09 10:43:05 compute-0 podman[226826]: 2025-12-09 10:43:05.422571157 +0000 UTC m=+0.202489164 container exec_died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202)
Dec 09 10:43:05 compute-0 sudo[226823]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:05 compute-0 systemd[1]: libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec 09 10:43:06 compute-0 sudo[227005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbtyqacopujhqmqvhehfwzrjgmnazghw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276985.718812-629-48624061993275/AnsiballZ_file.py'
Dec 09 10:43:06 compute-0 sudo[227005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:06 compute-0 python3.9[227007]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:06 compute-0 sudo[227005]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:07 compute-0 sudo[227157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfyapujqhytwucubcyrzsjotflzxpguo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276986.6772606-638-59594195573380/AnsiballZ_podman_container_info.py'
Dec 09 10:43:07 compute-0 sudo[227157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:07 compute-0 python3.9[227159]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 09 10:43:07 compute-0 sudo[227157]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:08 compute-0 sudo[227322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmtpnkwppwrjxvfxzefepgljabnuvbsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276987.7565966-646-253328281195152/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:08 compute-0 sudo[227322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:08 compute-0 python3.9[227324]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:08 compute-0 systemd[1]: Started libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope.
Dec 09 10:43:08 compute-0 podman[227325]: 2025-12-09 10:43:08.556137766 +0000 UTC m=+0.097523027 container exec b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 09 10:43:08 compute-0 podman[227325]: 2025-12-09 10:43:08.563054353 +0000 UTC m=+0.104439604 container exec_died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec 09 10:43:08 compute-0 sudo[227322]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:08 compute-0 systemd[1]: libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec 09 10:43:09 compute-0 sudo[227504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbrkifazxleupgwlusonwenbjedhhhpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276988.8953495-654-26679197858935/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:09 compute-0 sudo[227504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:09 compute-0 python3.9[227506]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:09 compute-0 systemd[1]: Started libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope.
Dec 09 10:43:09 compute-0 podman[227507]: 2025-12-09 10:43:09.657650262 +0000 UTC m=+0.105373781 container exec b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 09 10:43:09 compute-0 podman[227507]: 2025-12-09 10:43:09.690829912 +0000 UTC m=+0.138553341 container exec_died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:43:09 compute-0 systemd[1]: libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec 09 10:43:09 compute-0 sudo[227504]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:10 compute-0 sudo[227687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enjkalqekvcmyolbigpjfhyiibsegslg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276989.9981768-662-40821940020219/AnsiballZ_file.py'
Dec 09 10:43:10 compute-0 sudo[227687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:10 compute-0 python3.9[227689]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:10 compute-0 sudo[227687]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:11 compute-0 sudo[227839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cebcepnpghdkhsjwqigyfeekhpajepua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276990.9480112-671-173167308934591/AnsiballZ_podman_container_info.py'
Dec 09 10:43:11 compute-0 sudo[227839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:11 compute-0 python3.9[227841]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 09 10:43:11 compute-0 sudo[227839]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:12 compute-0 sudo[228013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-advznbwpnqupeuoipqqbcmifxgbaielm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276991.9971416-679-253759671975099/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:12 compute-0 sudo[228013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:12 compute-0 podman[227978]: 2025-12-09 10:43:12.461391541 +0000 UTC m=+0.086912299 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:43:12 compute-0 python3.9[228025]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:12 compute-0 systemd[1]: Started libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope.
Dec 09 10:43:12 compute-0 podman[228027]: 2025-12-09 10:43:12.836394836 +0000 UTC m=+0.161841382 container exec d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:43:12 compute-0 podman[228027]: 2025-12-09 10:43:12.869482344 +0000 UTC m=+0.194928880 container exec_died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:43:12 compute-0 systemd[1]: libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec 09 10:43:12 compute-0 sudo[228013]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:13 compute-0 sudo[228208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzwvmvvwtyiptejbzeflphppkfnhocfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276993.1600993-687-23462771344152/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:13 compute-0 sudo[228208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:13 compute-0 python3.9[228210]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:14 compute-0 systemd[1]: Started libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope.
Dec 09 10:43:14 compute-0 podman[228211]: 2025-12-09 10:43:14.483519525 +0000 UTC m=+0.556491930 container exec d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 10:43:14 compute-0 podman[228211]: 2025-12-09 10:43:14.700576523 +0000 UTC m=+0.773548948 container exec_died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:43:14 compute-0 sudo[228208]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:14 compute-0 systemd[1]: libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec 09 10:43:15 compute-0 sudo[228391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uekhongmfrmpsqzmtharbsnytaaihtuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276994.990682-695-256102235703203/AnsiballZ_file.py'
Dec 09 10:43:15 compute-0 sudo[228391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:15 compute-0 python3.9[228393]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:15 compute-0 sudo[228391]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:16 compute-0 sudo[228557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqwzvbfabqpwueaaqvsbnviwnwkqawzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276995.9578555-704-141736390908924/AnsiballZ_podman_container_info.py'
Dec 09 10:43:16 compute-0 sudo[228557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:16 compute-0 podman[228517]: 2025-12-09 10:43:16.433090769 +0000 UTC m=+0.107735364 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:43:16 compute-0 python3.9[228563]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 09 10:43:16 compute-0 sudo[228557]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:43:16.971 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:43:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:43:16.971 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:43:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:43:16.972 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:43:17 compute-0 sudo[228734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtofompmaphaprdrjyejeikxzwadnpgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276996.901245-712-169179560670877/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:17 compute-0 sudo[228734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:17 compute-0 python3.9[228736]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:17 compute-0 systemd[1]: Started libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope.
Dec 09 10:43:17 compute-0 podman[228737]: 2025-12-09 10:43:17.777520046 +0000 UTC m=+0.127107830 container exec 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:43:17 compute-0 podman[228737]: 2025-12-09 10:43:17.814300794 +0000 UTC m=+0.163888568 container exec_died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:43:17 compute-0 nova_compute[189493]: 2025-12-09 10:43:17.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:17 compute-0 systemd[1]: libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
Dec 09 10:43:17 compute-0 sudo[228734]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:18 compute-0 sudo[228915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcauxycpnnxopttfibfxidavazmlhemm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765276998.1589272-720-135013460582949/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:18 compute-0 sudo[228915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:18 compute-0 python3.9[228917]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:18 compute-0 nova_compute[189493]: 2025-12-09 10:43:18.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:18 compute-0 nova_compute[189493]: 2025-12-09 10:43:18.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:18 compute-0 nova_compute[189493]: 2025-12-09 10:43:18.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:19 compute-0 systemd[1]: Started libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope.
Dec 09 10:43:19 compute-0 podman[228918]: 2025-12-09 10:43:19.828616315 +0000 UTC m=+1.042881057 container exec 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:43:19 compute-0 nova_compute[189493]: 2025-12-09 10:43:19.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:19 compute-0 nova_compute[189493]: 2025-12-09 10:43:19.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:43:19 compute-0 nova_compute[189493]: 2025-12-09 10:43:19.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:43:19 compute-0 nova_compute[189493]: 2025-12-09 10:43:19.861 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 10:43:19 compute-0 nova_compute[189493]: 2025-12-09 10:43:19.862 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:19 compute-0 podman[228918]: 2025-12-09 10:43:19.865124146 +0000 UTC m=+1.079388898 container exec_died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:43:19 compute-0 nova_compute[189493]: 2025-12-09 10:43:19.867 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:43:19 compute-0 systemd[1]: libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
Dec 09 10:43:19 compute-0 sudo[228915]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:20 compute-0 podman[229070]: 2025-12-09 10:43:20.492721483 +0000 UTC m=+0.061701445 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 09 10:43:20 compute-0 sudo[229114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omnozdidvoufhiuadgowkgnyodmdxlvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277000.1415381-728-26277296278944/AnsiballZ_file.py'
Dec 09 10:43:20 compute-0 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-31abf5bbaf1ad868.service: Main process exited, code=exited, status=1/FAILURE
Dec 09 10:43:20 compute-0 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-31abf5bbaf1ad868.service: Failed with result 'exit-code'.
Dec 09 10:43:20 compute-0 sudo[229114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:20 compute-0 python3.9[229116]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:20 compute-0 sudo[229114]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:20 compute-0 nova_compute[189493]: 2025-12-09 10:43:20.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:20 compute-0 nova_compute[189493]: 2025-12-09 10:43:20.857 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:20 compute-0 nova_compute[189493]: 2025-12-09 10:43:20.857 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:20 compute-0 nova_compute[189493]: 2025-12-09 10:43:20.858 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:43:20 compute-0 nova_compute[189493]: 2025-12-09 10:43:20.887 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:43:20 compute-0 nova_compute[189493]: 2025-12-09 10:43:20.887 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:43:20 compute-0 nova_compute[189493]: 2025-12-09 10:43:20.888 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:43:20 compute-0 nova_compute[189493]: 2025-12-09 10:43:20.888 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.366 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.367 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5658MB free_disk=72.23981094360352GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.367 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.368 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.467 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.467 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:43:21 compute-0 sudo[229266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbddszjkoozzyddgdjgyfeyyukwzugky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277001.0436006-737-208072996130266/AnsiballZ_podman_container_info.py'
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.492 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:43:21 compute-0 sudo[229266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.509 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.511 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:43:21 compute-0 nova_compute[189493]: 2025-12-09 10:43:21.511 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:43:21 compute-0 python3.9[229268]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 09 10:43:21 compute-0 sudo[229266]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:22 compute-0 sudo[229445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqhiageneetbsodhdwduxfgeoiytffji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277002.118132-745-39782695208690/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:22 compute-0 sudo[229445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:22 compute-0 podman[229404]: 2025-12-09 10:43:22.896639875 +0000 UTC m=+0.134440959 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, release-0.7.12=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, release=1214.1726694543, architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Dec 09 10:43:23 compute-0 python3.9[229451]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:23 compute-0 systemd[1]: Started libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope.
Dec 09 10:43:23 compute-0 podman[229453]: 2025-12-09 10:43:23.224297975 +0000 UTC m=+0.135695163 container exec 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 10:43:23 compute-0 podman[229472]: 2025-12-09 10:43:23.346919342 +0000 UTC m=+0.089667894 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 10:43:23 compute-0 podman[229453]: 2025-12-09 10:43:23.380088581 +0000 UTC m=+0.291485779 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, distribution-scope=public, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64)
Dec 09 10:43:23 compute-0 systemd[1]: libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
Dec 09 10:43:23 compute-0 sudo[229445]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:24 compute-0 podman[229606]: 2025-12-09 10:43:24.209377472 +0000 UTC m=+0.089156800 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 09 10:43:24 compute-0 sudo[229648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bptfocccilollmarpwziaiwixtzsmrer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277003.7391016-753-224450765935514/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:24 compute-0 sudo[229648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:24 compute-0 python3.9[229653]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:24 compute-0 systemd[1]: Started libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope.
Dec 09 10:43:24 compute-0 podman[229655]: 2025-12-09 10:43:24.551573656 +0000 UTC m=+0.107058526 container exec 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec 09 10:43:24 compute-0 podman[229655]: 2025-12-09 10:43:24.649598286 +0000 UTC m=+0.205083106 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350)
Dec 09 10:43:24 compute-0 systemd[1]: libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
Dec 09 10:43:24 compute-0 sudo[229648]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:25 compute-0 podman[229806]: 2025-12-09 10:43:25.463738904 +0000 UTC m=+0.073884115 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 09 10:43:25 compute-0 sudo[229850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-togmedqdxheggbtfatkbmrcgqyorxofx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277005.099289-761-21786806129316/AnsiballZ_file.py'
Dec 09 10:43:25 compute-0 sudo[229850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:25 compute-0 python3.9[229852]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:25 compute-0 sudo[229850]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:26 compute-0 sudo[230002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjyeaskrgrizhrirzyhvnqaanpribljs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277006.0472856-770-131244920896902/AnsiballZ_podman_container_info.py'
Dec 09 10:43:26 compute-0 sudo[230002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:26 compute-0 python3.9[230004]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec 09 10:43:26 compute-0 sudo[230002]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:27 compute-0 sudo[230166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzjplsosyvkjldtvwzttpauahmdimhua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277007.0673196-778-162193319883555/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:27 compute-0 sudo[230166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:27 compute-0 python3.9[230168]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:28 compute-0 systemd[1]: Started libpod-conmon-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope.
Dec 09 10:43:28 compute-0 podman[230169]: 2025-12-09 10:43:28.092697152 +0000 UTC m=+0.195772363 container exec ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:43:28 compute-0 podman[230169]: 2025-12-09 10:43:28.125343587 +0000 UTC m=+0.228418818 container exec_died ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 10:43:28 compute-0 sudo[230166]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:28 compute-0 systemd[1]: libpod-conmon-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope: Deactivated successfully.
Dec 09 10:43:28 compute-0 sudo[230348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqggcwdhostgtovzohukzpzbfvzjdnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277008.5332122-786-84434115176696/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:28 compute-0 sudo[230348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:29 compute-0 python3.9[230350]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:29 compute-0 systemd[1]: Started libpod-conmon-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope.
Dec 09 10:43:29 compute-0 podman[230351]: 2025-12-09 10:43:29.465859218 +0000 UTC m=+0.231229076 container exec ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:43:29 compute-0 podman[230351]: 2025-12-09 10:43:29.714291028 +0000 UTC m=+0.479660866 container exec_died ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:43:29 compute-0 podman[203687]: time="2025-12-09T10:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:43:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec 09 10:43:29 compute-0 sudo[230348]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4252 "" "Go-http-client/1.1"
Dec 09 10:43:29 compute-0 systemd[1]: libpod-conmon-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope: Deactivated successfully.
Dec 09 10:43:30 compute-0 sudo[230530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asamueiwjyizwozvyqjfgsccfghswhzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277010.1092668-794-50550432531765/AnsiballZ_file.py'
Dec 09 10:43:30 compute-0 sudo[230530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:30 compute-0 podman[230532]: 2025-12-09 10:43:30.649957504 +0000 UTC m=+0.086363364 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6)
Dec 09 10:43:30 compute-0 python3.9[230533]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:30 compute-0 sudo[230530]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:31 compute-0 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:43:31 compute-0 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:43:31 compute-0 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:43:31 compute-0 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:43:31 compute-0 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:43:31 compute-0 sudo[230702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nexqpafeihuceqjnckfoxotbhgvcrjwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277011.0522609-803-2731656112153/AnsiballZ_podman_container_info.py'
Dec 09 10:43:31 compute-0 sudo[230702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:31 compute-0 python3.9[230704]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec 09 10:43:31 compute-0 sudo[230702]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:32 compute-0 sudo[230884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsbtstrjnjprqctiyygfibxvqpcmadqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277012.2461126-811-80782354412611/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:32 compute-0 sudo[230884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:32 compute-0 podman[230841]: 2025-12-09 10:43:32.764835724 +0000 UTC m=+0.153853156 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 09 10:43:32 compute-0 python3.9[230889]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:33 compute-0 systemd[1]: Started libpod-conmon-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope.
Dec 09 10:43:33 compute-0 podman[230896]: 2025-12-09 10:43:33.060603378 +0000 UTC m=+0.097195619 container exec 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git)
Dec 09 10:43:33 compute-0 podman[230896]: 2025-12-09 10:43:33.094324273 +0000 UTC m=+0.130916544 container exec_died 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, version=9.4, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., name=ubi9, config_id=edpm, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.buildah.version=1.29.0)
Dec 09 10:43:33 compute-0 systemd[1]: libpod-conmon-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope: Deactivated successfully.
Dec 09 10:43:33 compute-0 sudo[230884]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:33 compute-0 sudo[231077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plehmdhtwjyequqogfuvwzjwaxzysqkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277013.3870225-819-154095239217404/AnsiballZ_podman_container_exec.py'
Dec 09 10:43:33 compute-0 sudo[231077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:33 compute-0 podman[231080]: 2025-12-09 10:43:33.938496516 +0000 UTC m=+0.083365582 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:43:33 compute-0 python3.9[231079]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 09 10:43:34 compute-0 systemd[1]: Started libpod-conmon-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope.
Dec 09 10:43:34 compute-0 podman[231104]: 2025-12-09 10:43:34.065959335 +0000 UTC m=+0.077938756 container exec 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, version=9.4, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=)
Dec 09 10:43:34 compute-0 podman[231104]: 2025-12-09 10:43:34.096183944 +0000 UTC m=+0.108163345 container exec_died 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, distribution-scope=public, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, release-0.7.12=)
Dec 09 10:43:34 compute-0 systemd[1]: libpod-conmon-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope: Deactivated successfully.
Dec 09 10:43:34 compute-0 sudo[231077]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:34 compute-0 sudo[231286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfugqpoqrtahejozlnmugausbgpxmtpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277014.3696551-827-72959978755330/AnsiballZ_file.py'
Dec 09 10:43:34 compute-0 sudo[231286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:34 compute-0 sshd-session[231229]: Connection closed by authenticating user daemon 159.223.8.217 port 51740 [preauth]
Dec 09 10:43:35 compute-0 python3.9[231288]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:35 compute-0 sudo[231286]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:35 compute-0 sudo[231439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-saduzyfjpylmtzujefekxtgfdgjdlskv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277015.3474762-836-28247169983814/AnsiballZ_file.py'
Dec 09 10:43:35 compute-0 sudo[231439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:36 compute-0 python3.9[231441]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:36 compute-0 sudo[231439]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:36 compute-0 sudo[231591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfnmoyguidtsfqgvqmndqsxkghkbbjfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277016.396287-844-163438373451777/AnsiballZ_stat.py'
Dec 09 10:43:36 compute-0 sudo[231591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:36 compute-0 python3.9[231593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:43:36 compute-0 sudo[231591]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:37 compute-0 sudo[231714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xztaklmixejqjwikimthclwzreuxnfqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277016.396287-844-163438373451777/AnsiballZ_copy.py'
Dec 09 10:43:37 compute-0 sudo[231714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:37 compute-0 python3.9[231716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765277016.396287-844-163438373451777/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:37 compute-0 sudo[231714]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:38 compute-0 sudo[231866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyxpwjgslvadlyxszsxxonmufmrrssef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277018.0689428-860-116942140087504/AnsiballZ_file.py'
Dec 09 10:43:38 compute-0 sudo[231866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:38 compute-0 python3.9[231868]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:38 compute-0 sudo[231866]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:38 compute-0 sshd-session[231096]: Connection reset by 147.185.132.126 port 61466 [preauth]
Dec 09 10:43:39 compute-0 sudo[232018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylproispzgbazivdgbwehygpmugketgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277018.939693-868-95374127426925/AnsiballZ_stat.py'
Dec 09 10:43:39 compute-0 sudo[232018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:39 compute-0 python3.9[232020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:43:39 compute-0 sudo[232018]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:40 compute-0 sudo[232096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhawonxfchrbefwdctqkngjnpfsifnro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277018.939693-868-95374127426925/AnsiballZ_file.py'
Dec 09 10:43:40 compute-0 sudo[232096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:40 compute-0 python3.9[232098]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:40 compute-0 sudo[232096]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:40 compute-0 sudo[232248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkzqpfqaviyvrxblonovnzpmmlyayfgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277020.5186458-880-126343743879171/AnsiballZ_stat.py'
Dec 09 10:43:40 compute-0 sudo[232248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:41 compute-0 python3.9[232250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:43:41 compute-0 sudo[232248]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:41 compute-0 sudo[232326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfhdzhlkkgzftpaeqtxvlmlxybtgqdkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277020.5186458-880-126343743879171/AnsiballZ_file.py'
Dec 09 10:43:41 compute-0 sudo[232326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:41 compute-0 python3.9[232328]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.47_zv256 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:41 compute-0 sudo[232326]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:42 compute-0 sudo[232478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elosvgcljojwtvzulgcwizxchfqlpxwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277022.0191545-892-59703394514499/AnsiballZ_stat.py'
Dec 09 10:43:42 compute-0 sudo[232478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:42 compute-0 python3.9[232480]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:43:42 compute-0 sudo[232478]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:42 compute-0 podman[232524]: 2025-12-09 10:43:42.915238766 +0000 UTC m=+0.072838457 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:43:42 compute-0 sudo[232573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xelggphesocdcwuixfcvghogjflnlvhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277022.0191545-892-59703394514499/AnsiballZ_file.py'
Dec 09 10:43:42 compute-0 sudo[232573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:43 compute-0 python3.9[232575]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:43 compute-0 sudo[232573]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:43 compute-0 sudo[232725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiedxhxcpmyvuakhqpyyanmqsyweafiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277023.3931346-905-202371340584377/AnsiballZ_command.py'
Dec 09 10:43:43 compute-0 sudo[232725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:44 compute-0 python3.9[232727]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:43:44 compute-0 sudo[232725]: pam_unix(sudo:session): session closed for user root
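The task above captures the current live ruleset with "nft -j list ruleset" before the EDPM rule files are rendered. A minimal Python sketch of that step, assuming only that nft's JSON output is a top-level {"nftables": [...]} document; the helper name list_ruleset is hypothetical and not taken from the playbook:

    import json
    import subprocess

    def list_ruleset():
        # Mirrors the logged command: ask nft for the whole ruleset as JSON
        # instead of its text form.
        out = subprocess.run(
            ["nft", "-j", "list", "ruleset"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    if __name__ == "__main__":
        # The JSON document is {"nftables": [...]}; print the tables it contains.
        for item in list_ruleset().get("nftables", []):
            if "table" in item:
                print(item["table"]["family"], item["table"]["name"])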
Dec 09 10:43:44 compute-0 sudo[232878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itnqaggoukjjxjfjctlhajdfworanehl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765277024.3974252-913-12737131348239/AnsiballZ_edpm_nftables_from_files.py'
Dec 09 10:43:44 compute-0 sudo[232878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:45 compute-0 python3[232880]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 09 10:43:45 compute-0 sudo[232878]: pam_unix(sudo:session): session closed for user root
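edpm_nftables_from_files is invoked with src=/var/lib/edpm-config/firewall, the directory where the base and user rule YAML files were written a few tasks earlier. A rough sketch of that kind of loader, under the assumption (not confirmed by the log) that each *.yaml file holds a YAML list of rule entries to be concatenated; load_rule_files is a hypothetical name:

    import glob
    import os
    import yaml  # PyYAML

    def load_rule_files(src="/var/lib/edpm-config/firewall"):
        # Assumption: every *.yaml file under src is a YAML list of rule dicts;
        # the loader simply concatenates them in filename order.
        rules = []
        for path in sorted(glob.glob(os.path.join(src, "*.yaml"))):
            with open(path) as handle:
                data = yaml.safe_load(handle) or []
            rules.extend(data)
        return rules

    if __name__ == "__main__":
        for rule in load_rule_files():
            print(rule)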
Dec 09 10:43:45 compute-0 sudo[233030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrewjshfhmzsadmzmhaxawqzfusrisba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277025.5012727-921-170772474037404/AnsiballZ_stat.py'
Dec 09 10:43:45 compute-0 sudo[233030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:46 compute-0 python3.9[233032]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:43:46 compute-0 sudo[233030]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:46 compute-0 sudo[233108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhwpgjxzdjitazusgtrvopppkeurfolu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277025.5012727-921-170772474037404/AnsiballZ_file.py'
Dec 09 10:43:46 compute-0 sudo[233108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:46 compute-0 podman[233110]: 2025-12-09 10:43:46.623137156 +0000 UTC m=+0.063087393 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:43:46 compute-0 python3.9[233111]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:46 compute-0 sudo[233108]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:47 compute-0 sudo[233283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiqwvqmguctgzmywcslvjvdvkbiaqvvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277026.9712963-933-182839190387567/AnsiballZ_stat.py'
Dec 09 10:43:47 compute-0 sudo[233283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:47 compute-0 python3.9[233285]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:43:47 compute-0 sudo[233283]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:48 compute-0 sudo[233361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mclxonmjssmyiobigyxpgqapghcuunrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277026.9712963-933-182839190387567/AnsiballZ_file.py'
Dec 09 10:43:48 compute-0 sudo[233361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:48 compute-0 python3.9[233363]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:48 compute-0 sudo[233361]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:48 compute-0 sudo[233513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrozmsxhqqeqicdejrzzvmcmlptolxnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277028.5288541-945-39764899034562/AnsiballZ_stat.py'
Dec 09 10:43:48 compute-0 sudo[233513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:49 compute-0 python3.9[233515]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:43:49 compute-0 sudo[233513]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:49 compute-0 sudo[233592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzzsygzvnzichaxgjncttsxkfyamfljg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277028.5288541-945-39764899034562/AnsiballZ_file.py'
Dec 09 10:43:49 compute-0 sudo[233592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:49 compute-0 python3.9[233594]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:49 compute-0 sudo[233592]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:50 compute-0 sudo[233744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diruuxkysetwrlabrwpsucspzpdcwziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277030.157985-957-113881233023661/AnsiballZ_stat.py'
Dec 09 10:43:50 compute-0 sudo[233744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:50 compute-0 podman[233746]: 2025-12-09 10:43:50.648542737 +0000 UTC m=+0.086085071 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm)
Dec 09 10:43:50 compute-0 python3.9[233747]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:43:50 compute-0 sudo[233744]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:51 compute-0 sudo[233842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyglrvdameqlnlnvszorchepqgdyuijm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277030.157985-957-113881233023661/AnsiballZ_file.py'
Dec 09 10:43:51 compute-0 sudo[233842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:51 compute-0 python3.9[233844]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:51 compute-0 sudo[233842]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:52 compute-0 sudo[233994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otisqrtseubmikliehvbknveaytwziud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277031.7062-969-139715743232838/AnsiballZ_stat.py'
Dec 09 10:43:52 compute-0 sudo[233994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:52 compute-0 python3.9[233996]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:43:52 compute-0 sudo[233994]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:53 compute-0 sudo[234132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-narfppomvyypyjmphgrdvmsyuoxwiolr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277031.7062-969-139715743232838/AnsiballZ_copy.py'
Dec 09 10:43:53 compute-0 sudo[234132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:53 compute-0 podman[234093]: 2025-12-09 10:43:53.187197077 +0000 UTC m=+0.091472429 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, io.openshift.expose-services=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 09 10:43:53 compute-0 python3.9[234139]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765277031.7062-969-139715743232838/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:53 compute-0 sudo[234132]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:54 compute-0 sudo[234289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphlhdxfjjdimbgpxzarfkxhyuzyzbgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277033.6425471-984-250966958395892/AnsiballZ_file.py'
Dec 09 10:43:54 compute-0 sudo[234289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:54 compute-0 python3.9[234291]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:54 compute-0 sudo[234289]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:54 compute-0 podman[234381]: 2025-12-09 10:43:54.979234812 +0000 UTC m=+0.116551723 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Dec 09 10:43:55 compute-0 sudo[234461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cesmfdtoubufgnoslmlldpvcssrshxgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277034.637171-992-142629750722516/AnsiballZ_command.py'
Dec 09 10:43:55 compute-0 sudo[234461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:55 compute-0 python3.9[234463]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:43:55 compute-0 sudo[234461]: pam_unix(sudo:session): session closed for user root
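The command above is a dry run: nft -c parses the input without committing it, and the files are concatenated as chains, flushes, rules, update-jumps, jumps so that cross-file references resolve in order. A small sketch of the same check, with the file list taken verbatim from the logged command and check_ruleset as a hypothetical helper name:

    import subprocess

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def check_ruleset(files=FILES):
        # Equivalent of: cat <files> | nft -c -f -
        # -c makes nft parse and validate the ruleset without applying it.
        combined = "".join(open(path).read() for path in files)
        subprocess.run(["nft", "-c", "-f", "-"], input=combined,
                       text=True, check=True)

    if __name__ == "__main__":
        check_ruleset()
        print("ruleset parses cleanly")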
Dec 09 10:43:55 compute-0 podman[234543]: 2025-12-09 10:43:55.920058828 +0000 UTC m=+0.077243729 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 09 10:43:56 compute-0 sudo[234634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbatmlmxphjdoiaheqocttwzgrfdajpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277035.5864325-1000-98502183249872/AnsiballZ_blockinfile.py'
Dec 09 10:43:56 compute-0 sudo[234634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:56 compute-0 python3.9[234636]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:43:56 compute-0 sudo[234634]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:57 compute-0 sudo[234786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzgqgmteistiskshalwkdqajsespsbio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277036.796596-1009-94321639191653/AnsiballZ_command.py'
Dec 09 10:43:57 compute-0 sudo[234786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:57 compute-0 python3.9[234788]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:43:57 compute-0 sudo[234786]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:58 compute-0 sudo[234939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poideahduveuqovolwprqxkkqeexirih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277037.7475593-1017-206835756993957/AnsiballZ_stat.py'
Dec 09 10:43:58 compute-0 sudo[234939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:58 compute-0 python3.9[234941]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 09 10:43:58 compute-0 sudo[234939]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:59 compute-0 sudo[235093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixvzqsbjohbacgxiogotwcifkuebbycf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277038.5683513-1025-205239974017642/AnsiballZ_command.py'
Dec 09 10:43:59 compute-0 sudo[235093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:43:59 compute-0 python3.9[235095]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:43:59 compute-0 sudo[235093]: pam_unix(sudo:session): session closed for user root
Dec 09 10:43:59 compute-0 podman[203687]: time="2025-12-09T10:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:43:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:43:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4255 "" "Go-http-client/1.1"
Dec 09 10:43:59 compute-0 sudo[235248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvqugccsnigwtwaoryrwpwumcdotrsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277039.5882685-1033-70089348974565/AnsiballZ_file.py'
Dec 09 10:43:59 compute-0 sudo[235248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:00 compute-0 python3.9[235250]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:44:00 compute-0 sudo[235248]: pam_unix(sudo:session): session closed for user root
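Taken together, the tasks from 10:43:57 to 10:44:00 apply the ruleset in two stages: edpm-chains.nft is loaded unconditionally, while the flush/rules/update-jumps bundle is applied only because the edpm-rules.nft.changed marker created after the copy task still existed, and the marker is removed afterwards. A condensed sketch of that sequence, assuming the marker-file convention works the way the log suggests; apply_if_changed is a hypothetical name:

    import os
    import subprocess

    CHANGED_MARKER = "/etc/nftables/edpm-rules.nft.changed"
    APPLY_ORDER = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    def apply_if_changed():
        # The chains file is applied unconditionally, as in the logged task.
        subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)
        # The flush/rules/update-jumps bundle only runs when the copy task left
        # the ".changed" marker behind; the marker is removed afterwards.
        if os.path.exists(CHANGED_MARKER):
            combined = "".join(open(p).read() for p in APPLY_ORDER)
            subprocess.run(["nft", "-f", "-"], input=combined,
                           text=True, check=True)
            os.remove(CHANGED_MARKER)

    if __name__ == "__main__":
        apply_if_changed()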
Dec 09 10:44:00 compute-0 sshd-session[214912]: Connection closed by 192.168.122.30 port 39610
Dec 09 10:44:00 compute-0 sshd-session[214909]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:44:00 compute-0 systemd-logind[806]: Session 27 logged out. Waiting for processes to exit.
Dec 09 10:44:00 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Dec 09 10:44:00 compute-0 systemd[1]: session-27.scope: Consumed 1min 38.624s CPU time.
Dec 09 10:44:00 compute-0 systemd-logind[806]: Removed session 27.
Dec 09 10:44:01 compute-0 podman[235275]: 2025-12-09 10:44:01.001295804 +0000 UTC m=+0.149740609 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Dec 09 10:44:01 compute-0 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:44:01 compute-0 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:44:01 compute-0 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:44:01 compute-0 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:44:01 compute-0 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:44:02 compute-0 podman[235296]: 2025-12-09 10:44:02.948687671 +0000 UTC m=+0.100480024 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 09 10:44:04 compute-0 podman[235321]: 2025-12-09 10:44:04.969138403 +0000 UTC m=+0.121390444 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:44:06 compute-0 sshd-session[235343]: Connection closed by authenticating user daemon 159.223.8.217 port 41456 [preauth]
Dec 09 10:44:07 compute-0 sshd-session[235345]: Accepted publickey for zuul from 192.168.122.30 port 54958 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 10:44:07 compute-0 systemd-logind[806]: New session 28 of user zuul.
Dec 09 10:44:07 compute-0 systemd[1]: Started Session 28 of User zuul.
Dec 09 10:44:07 compute-0 sshd-session[235345]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:44:08 compute-0 python3.9[235498]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:44:09 compute-0 sudo[235652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlwmneerjyvtrmqmletwuzsgzsxvveiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277049.028385-34-4803123692229/AnsiballZ_systemd.py'
Dec 09 10:44:09 compute-0 sudo[235652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:10 compute-0 python3.9[235654]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec 09 10:44:10 compute-0 sudo[235652]: pam_unix(sudo:session): session closed for user root
Dec 09 10:44:11 compute-0 sudo[235807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvljascjtdxojsqkldlqodkieijwmpgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277050.576631-42-254604069197080/AnsiballZ_setup.py'
Dec 09 10:44:11 compute-0 sudo[235807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:11 compute-0 python3.9[235809]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 09 10:44:11 compute-0 sudo[235807]: pam_unix(sudo:session): session closed for user root
Dec 09 10:44:11 compute-0 sshd-session[235800]: Received disconnect from 80.94.93.119 port 23936:11:  [preauth]
Dec 09 10:44:11 compute-0 sshd-session[235800]: Disconnected from authenticating user root 80.94.93.119 port 23936 [preauth]
Dec 09 10:44:12 compute-0 sudo[235891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tseludgujllpxvpkopmwqsnafjyyuxtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277050.576631-42-254604069197080/AnsiballZ_dnf.py'
Dec 09 10:44:12 compute-0 sudo[235891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:12 compute-0 python3.9[235893]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 09 10:44:13 compute-0 podman[235896]: 2025-12-09 10:44:13.953688006 +0000 UTC m=+0.105083720 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:44:15 compute-0 sudo[235891]: pam_unix(sudo:session): session closed for user root
Dec 09 10:44:16 compute-0 sudo[236069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyukwdlckmhscrgdakqdmwznqznpfwll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277055.919134-54-144602057052743/AnsiballZ_stat.py'
Dec 09 10:44:16 compute-0 sudo[236069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:16 compute-0 python3.9[236071]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:44:16 compute-0 sudo[236069]: pam_unix(sudo:session): session closed for user root
Dec 09 10:44:16 compute-0 nova_compute[189493]: 2025-12-09 10:44:16.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:16 compute-0 nova_compute[189493]: 2025-12-09 10:44:16.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 10:44:16 compute-0 nova_compute[189493]: 2025-12-09 10:44:16.870 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 10:44:16 compute-0 nova_compute[189493]: 2025-12-09 10:44:16.871 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:16 compute-0 nova_compute[189493]: 2025-12-09 10:44:16.871 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 10:44:16 compute-0 nova_compute[189493]: 2025-12-09 10:44:16.887 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:16 compute-0 podman[236092]: 2025-12-09 10:44:16.94045213 +0000 UTC m=+0.095586541 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:44:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:44:16.972 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:44:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:44:16.973 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:44:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:44:16.973 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:44:17 compute-0 sudo[236215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykawtwuzsokdsmdtrkjwojaqoelfcrha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277055.919134-54-144602057052743/AnsiballZ_copy.py'
Dec 09 10:44:17 compute-0 sudo[236215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:17 compute-0 python3.9[236217]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765277055.919134-54-144602057052743/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:44:17 compute-0 sudo[236215]: pam_unix(sudo:session): session closed for user root
Dec 09 10:44:18 compute-0 sudo[236367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqbazbrkmmbglngwamsozufxyteizogk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277058.0999472-69-40007358261668/AnsiballZ_file.py'
Dec 09 10:44:18 compute-0 sudo[236367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:19 compute-0 python3.9[236369]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:44:19 compute-0 nova_compute[189493]: 2025-12-09 10:44:19.095 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:19 compute-0 nova_compute[189493]: 2025-12-09 10:44:19.097 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:19 compute-0 sudo[236367]: pam_unix(sudo:session): session closed for user root
Dec 09 10:44:19 compute-0 sudo[236520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipbnzibscfzeijmoedccmqlbsdvobbys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277059.3268428-77-32268805749470/AnsiballZ_stat.py'
Dec 09 10:44:19 compute-0 sudo[236520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:19 compute-0 python3.9[236522]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 09 10:44:20 compute-0 sudo[236520]: pam_unix(sudo:session): session closed for user root
Dec 09 10:44:20 compute-0 sudo[236643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjaoisbbykumqecgyrngrevbnlgyiosf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277059.3268428-77-32268805749470/AnsiballZ_copy.py'
Dec 09 10:44:20 compute-0 sudo[236643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:20 compute-0 python3.9[236645]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765277059.3268428-77-32268805749470/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 09 10:44:20 compute-0 sudo[236643]: pam_unix(sudo:session): session closed for user root
Dec 09 10:44:20 compute-0 nova_compute[189493]: 2025-12-09 10:44:20.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:20 compute-0 nova_compute[189493]: 2025-12-09 10:44:20.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:44:20 compute-0 nova_compute[189493]: 2025-12-09 10:44:20.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:44:20 compute-0 nova_compute[189493]: 2025-12-09 10:44:20.860 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 10:44:20 compute-0 nova_compute[189493]: 2025-12-09 10:44:20.860 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:20 compute-0 nova_compute[189493]: 2025-12-09 10:44:20.860 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:20 compute-0 podman[236670]: 2025-12-09 10:44:20.933126466 +0000 UTC m=+0.086486783 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 10:44:21 compute-0 sudo[236812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbzyxvhmahruysgodoofskhtcewvkuta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765277060.9073484-92-210091547432013/AnsiballZ_systemd.py'
Dec 09 10:44:21 compute-0 sudo[236812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:44:21 compute-0 python3.9[236814]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 09 10:44:21 compute-0 systemd[1]: Stopping System Logging Service...
Dec 09 10:44:21 compute-0 nova_compute[189493]: 2025-12-09 10:44:21.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:21 compute-0 nova_compute[189493]: 2025-12-09 10:44:21.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:21 compute-0 nova_compute[189493]: 2025-12-09 10:44:21.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:44:22 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec 09 10:44:22 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec 09 10:44:22 compute-0 systemd[1]: Stopped System Logging Service.
Dec 09 10:44:22 compute-0 systemd[1]: rsyslog.service: Consumed 4.290s CPU time, 8.7M memory peak, read 0B from disk, written 6.8M to disk.
Dec 09 10:44:22 compute-0 systemd[1]: Starting System Logging Service...
Dec 09 10:44:22 compute-0 rsyslogd[236818]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236818" x-info="https://www.rsyslog.com"] start
Dec 09 10:44:22 compute-0 systemd[1]: Started System Logging Service.
Dec 09 10:44:22 compute-0 rsyslogd[236818]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 10:44:22 compute-0 rsyslogd[236818]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec 09 10:44:22 compute-0 rsyslogd[236818]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec 09 10:44:22 compute-0 rsyslogd[236818]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec 09 10:44:22 compute-0 sudo[236812]: pam_unix(sudo:session): session closed for user root
Dec 09 10:44:22 compute-0 rsyslogd[236818]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Dec 09 10:44:22 compute-0 sshd-session[235348]: Connection closed by 192.168.122.30 port 54958
Dec 09 10:44:22 compute-0 sshd-session[235345]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:44:22 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec 09 10:44:22 compute-0 systemd[1]: session-28.scope: Consumed 11.417s CPU time.
Dec 09 10:44:22 compute-0 systemd-logind[806]: Session 28 logged out. Waiting for processes to exit.
Dec 09 10:44:22 compute-0 systemd-logind[806]: Removed session 28.
Dec 09 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.871 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.872 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.872 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.872 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.286 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.288 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.367 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.370 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5706MB free_disk=72.24060821533203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.371 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.372 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.625 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.626 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.707 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.792 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.792 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.808 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.837 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.865 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.881 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.884 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.885 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:44:23 compute-0 podman[236848]: 2025-12-09 10:44:23.937471469 +0000 UTC m=+0.087838689 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-type=git, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, config_id=edpm)
Dec 09 10:44:25 compute-0 podman[236868]: 2025-12-09 10:44:25.978609576 +0000 UTC m=+0.118049014 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
Dec 09 10:44:26 compute-0 podman[236886]: 2025-12-09 10:44:26.122470213 +0000 UTC m=+0.133999959 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 09 10:44:29 compute-0 podman[203687]: time="2025-12-09T10:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:44:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:44:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4265 "" "Go-http-client/1.1"
Dec 09 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:44:31 compute-0 podman[236904]: 2025-12-09 10:44:31.974294088 +0000 UTC m=+0.118387173 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec 09 10:44:33 compute-0 podman[236924]: 2025-12-09 10:44:33.97202926 +0000 UTC m=+0.125383863 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:44:35 compute-0 podman[236949]: 2025-12-09 10:44:35.939621278 +0000 UTC m=+0.084192010 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:44:37 compute-0 sshd-session[236972]: Connection closed by authenticating user daemon 159.223.8.217 port 56662 [preauth]
Dec 09 10:44:44 compute-0 podman[236974]: 2025-12-09 10:44:44.784088636 +0000 UTC m=+0.107695791 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 10:44:47 compute-0 podman[236992]: 2025-12-09 10:44:47.971044785 +0000 UTC m=+0.116418160 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:44:51 compute-0 podman[237016]: 2025-12-09 10:44:51.948594462 +0000 UTC m=+0.098583762 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:44:54 compute-0 podman[237036]: 2025-12-09 10:44:54.91613549 +0000 UTC m=+0.075282867 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, name=ubi9, container_name=kepler, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30)
Dec 09 10:44:56 compute-0 podman[237055]: 2025-12-09 10:44:56.924381194 +0000 UTC m=+0.073783146 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 10:44:56 compute-0 podman[237056]: 2025-12-09 10:44:56.970079786 +0000 UTC m=+0.124786135 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 09 10:44:59 compute-0 podman[203687]: time="2025-12-09T10:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:44:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:44:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4265 "" "Go-http-client/1.1"
Dec 09 10:45:00 compute-0 sshd-session[237091]: Accepted publickey for zuul from 38.102.83.145 port 37766 ssh2: RSA SHA256:OoA6ymXz1bGWu/N8aYc4tZBvI5ffrgdXcLpAm+SU/Q8
Dec 09 10:45:00 compute-0 systemd-logind[806]: New session 29 of user zuul.
Dec 09 10:45:00 compute-0 systemd[1]: Started Session 29 of User zuul.
Dec 09 10:45:01 compute-0 sshd-session[237091]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:45:02 compute-0 podman[237243]: 2025-12-09 10:45:02.125294321 +0000 UTC m=+0.094355088 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 10:45:02 compute-0 python3[237280]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:45:04 compute-0 sudo[237521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzzsdhkevphiqlgjlmwrevkmiugbqdib ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765277103.622845-36966-205913173223599/AnsiballZ_command.py'
Dec 09 10:45:04 compute-0 sudo[237521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:45:04 compute-0 podman[237484]: 2025-12-09 10:45:04.282720037 +0000 UTC m=+0.179464637 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 10:45:04 compute-0 python3[237528]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:45:04 compute-0 sudo[237521]: pam_unix(sudo:session): session closed for user root
Dec 09 10:45:05 compute-0 sudo[237689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skrqyqaabbwymuedbtdnbcjagkefejnl ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765277104.8704853-36977-214415738022216/AnsiballZ_command.py'
Dec 09 10:45:05 compute-0 sudo[237689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:45:05 compute-0 python3[237691]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:45:06 compute-0 sudo[237689]: pam_unix(sudo:session): session closed for user root
Dec 09 10:45:06 compute-0 podman[237694]: 2025-12-09 10:45:06.929943418 +0000 UTC m=+0.083066953 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:45:08 compute-0 python3[237865]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 09 10:45:09 compute-0 sshd-session[237892]: Connection closed by authenticating user daemon 159.223.8.217 port 48022 [preauth]
Dec 09 10:45:09 compute-0 sudo[238018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzyzuitijvjwujrclqpscljqlfvjueam ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765277108.738505-37021-124555466927854/AnsiballZ_setup.py'
Dec 09 10:45:09 compute-0 sudo[238018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:45:09 compute-0 python3[238020]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 09 10:45:10 compute-0 sudo[238018]: pam_unix(sudo:session): session closed for user root
Dec 09 10:45:11 compute-0 sudo[238243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjbkrwdkgywjgdmjihbsfrshvcvszvyv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765277111.194181-37050-270484929620035/AnsiballZ_command.py'
Dec 09 10:45:11 compute-0 sudo[238243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:45:11 compute-0 python3[238245]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:45:11 compute-0 sudo[238243]: pam_unix(sudo:session): session closed for user root
Dec 09 10:45:12 compute-0 sudo[238407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eixmqllvhsvfexhjcpmqmfkcgouxsmig ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765277112.3688989-37067-263038559897571/AnsiballZ_command.py'
Dec 09 10:45:12 compute-0 sudo[238407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 10:45:12 compute-0 python3[238409]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 10:45:13 compute-0 sudo[238407]: pam_unix(sudo:session): session closed for user root
Dec 09 10:45:14 compute-0 podman[238449]: 2025-12-09 10:45:14.93447046 +0000 UTC m=+0.088004151 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:45:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:45:16.973 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:45:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:45:16.974 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:45:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:45:16.974 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:45:18 compute-0 podman[238467]: 2025-12-09 10:45:18.921859088 +0000 UTC m=+0.078282661 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:45:19 compute-0 nova_compute[189493]: 2025-12-09 10:45:19.880 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:19 compute-0 nova_compute[189493]: 2025-12-09 10:45:19.881 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.866 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.867 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.868 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.861 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.861 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.862 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.862 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.895 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.896 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.896 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.896 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:45:22 compute-0 podman[238490]: 2025-12-09 10:45:22.949636258 +0000 UTC m=+0.100935660 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.234 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.236 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5661MB free_disk=72.23666000366211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.236 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.236 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.316 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.317 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.346 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.360 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
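The inventory dict above is what the resource tracker reports to Placement. Placement derives schedulable capacity per resource class as int((total - reserved) * allocation_ratio), which is why 8 physical vCPUs with a 4.0 allocation ratio can back 32 VCPU allocations. A quick check with the values from this line:

    # Placement computes schedulable capacity per resource class as
    # int((total - reserved) * allocation_ratio); values from the line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, cap)  # VCPU 32, MEMORY_MB 7167, DISK_GB 71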
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.361 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.361 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
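The Acquiring/acquired/released bookkeeping around "compute_resources" is emitted by oslo.concurrency's lock helpers (lockutils.py, as the file paths in these lines show). A minimal sketch of the same pattern using the public lockutils API; the function name here is illustrative:

    from oslo_concurrency import lockutils

    # Decorator form: serializes callers on the named in-process lock and
    # logs the waited/held durations seen in the journal lines above.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # critical section guarded by the "compute_resources" lock

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass  # critical section

    update_available_resource()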
Dec 09 10:45:24 compute-0 nova_compute[189493]: 2025-12-09 10:45:24.341 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:45:25 compute-0 podman[238509]: 2025-12-09 10:45:25.915277292 +0000 UTC m=+0.071039178 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 09 10:45:27 compute-0 podman[238528]: 2025-12-09 10:45:27.932831335 +0000 UTC m=+0.088205297 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 09 10:45:27 compute-0 podman[238529]: 2025-12-09 10:45:27.951873614 +0000 UTC m=+0.095720105 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:45:29 compute-0 podman[203687]: time="2025-12-09T10:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:45:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:45:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4267 "" "Go-http-client/1.1"
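The two GET lines are a client polling podman's libpod REST API; per the podman_exporter config later in this log, that client connects over unix:///run/podman/podman.sock. A sketch of issuing the same request from Python over that socket (socket path and API version taken from these lines; reading the socket typically requires root):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP client over a local unix socket."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    # Socket path and API version as seen in these log lines.
    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))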
Dec 09 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
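These exporter errors repeat every collection cycle and are expected on a node like this: ovn-northd most likely runs on the control plane rather than on this compute node, so no ovn-northd control socket exists under the mounted run directory, and the dpif-netdev/pmd-* appctl commands apply to the userspace (DPDK) datapath, which this node does not appear to have. A sketch of the control-socket probe the exporter performs before each appctl-style call (run-directory paths mirror the container volume mounts logged below and are environment-specific assumptions):

    import glob

    # ovs/ovn daemons expose appctl control sockets named <daemon>.<pid>.ctl
    # in their run directories; the exporter fails when none are present.
    for daemon, rundir in [('ovn-northd', '/run/ovn'),
                           ('ovsdb-server', '/run/openvswitch')]:
        sockets = glob.glob(f'{rundir}/{daemon}.*.ctl')
        print(daemon, '->', sockets or 'no control socket files found')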
Dec 09 10:45:32 compute-0 podman[238562]: 2025-12-09 10:45:32.962523946 +0000 UTC m=+0.105940771 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 09 10:45:34 compute-0 podman[238583]: 2025-12-09 10:45:34.962796157 +0000 UTC m=+0.124693151 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 10:45:37 compute-0 podman[238609]: 2025-12-09 10:45:37.905315518 +0000 UTC m=+0.059439526 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
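node_exporter is published on host port 9100 per the 'ports' mapping above. A minimal scrape from the host follows; note that the --web.config.file option can enforce TLS or basic auth via node_exporter.yaml, in which case this plain-HTTP URL (an assumption) would need adjusting:

    import urllib.request

    # Plain-HTTP scrape; switch to https if node_exporter.yaml enables TLS.
    with urllib.request.urlopen('http://localhost:9100/metrics', timeout=5) as r:
        for line in r.read().decode().splitlines()[:5]:
            print(line)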
Dec 09 10:45:39 compute-0 sshd-session[238633]: Invalid user debian from 159.223.8.217 port 59430
Dec 09 10:45:39 compute-0 sshd-session[238633]: Connection closed by invalid user debian 159.223.8.217 port 59430 [preauth]
Dec 09 10:45:45 compute-0 podman[238635]: 2025-12-09 10:45:45.912171064 +0000 UTC m=+0.073972781 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 09 10:45:49 compute-0 podman[238656]: 2025-12-09 10:45:49.935629485 +0000 UTC m=+0.088716181 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:45:53 compute-0 podman[238679]: 2025-12-09 10:45:53.928389651 +0000 UTC m=+0.081016086 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 10:45:56 compute-0 podman[238698]: 2025-12-09 10:45:56.942837443 +0000 UTC m=+0.095141559 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 09 10:45:58 compute-0 podman[238719]: 2025-12-09 10:45:58.970579896 +0000 UTC m=+0.120692585 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 09 10:45:58 compute-0 podman[238718]: 2025-12-09 10:45:58.997813773 +0000 UTC m=+0.139788883 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 10:45:59 compute-0 podman[203687]: time="2025-12-09T10:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:45:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:45:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4264 "" "Go-http-client/1.1"
Dec 09 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:46:03 compute-0 podman[238756]: 2025-12-09 10:46:03.998638306 +0000 UTC m=+0.134382511 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350)
Dec 09 10:46:06 compute-0 podman[238777]: 2025-12-09 10:46:06.006876215 +0000 UTC m=+0.149133086 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:46:08 compute-0 podman[238804]: 2025-12-09 10:46:08.947327651 +0000 UTC m=+0.094206990 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 10:46:10 compute-0 sshd-session[238827]: Invalid user debian from 159.223.8.217 port 53744
Dec 09 10:46:10 compute-0 sshd-session[238827]: Connection closed by invalid user debian 159.223.8.217 port 53744 [preauth]
Dec 09 10:46:12 compute-0 sshd-session[237094]: Received disconnect from 38.102.83.145 port 37766:11: disconnected by user
Dec 09 10:46:12 compute-0 sshd-session[237094]: Disconnected from user zuul 38.102.83.145 port 37766
Dec 09 10:46:12 compute-0 sshd-session[237091]: pam_unix(sshd:session): session closed for user zuul
Dec 09 10:46:12 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec 09 10:46:12 compute-0 systemd[1]: session-29.scope: Consumed 10.092s CPU time.
Dec 09 10:46:12 compute-0 systemd-logind[806]: Session 29 logged out. Waiting for processes to exit.
Dec 09 10:46:12 compute-0 systemd-logind[806]: Removed session 29.
Dec 09 10:46:16 compute-0 podman[238829]: 2025-12-09 10:46:16.943333212 +0000 UTC m=+0.104353877 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202)
Dec 09 10:46:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:46:16.974 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:46:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:46:16.975 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:46:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:46:16.975 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:46:18 compute-0 nova_compute[189493]: 2025-12-09 10:46:18.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:46:20 compute-0 nova_compute[189493]: 2025-12-09 10:46:20.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:46:20 compute-0 podman[238848]: 2025-12-09 10:46:20.908181518 +0000 UTC m=+0.065767441 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.990 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.991 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.992 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.993 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
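The "Running periodic task ComputeManager._*" lines come from oslo.service's periodic task runner (periodic_task.py:210 in these records). A minimal sketch of how such tasks are declared and driven; the 60-second spacing and task body are illustrative, not nova's actual configuration:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # run_immediately=True makes the task fire on the first tick.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def update_available_resource(self, context):
            print('auditing locally available compute resources')

    mgr = Manager(cfg.CONF)
    # Emits a "Running periodic task ..." debug line per task, as above.
    mgr.run_periodic_tasks(context=None)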
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.286 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.287 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.405 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.406 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5712MB free_disk=72.23675155639648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.406 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.406 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.554 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.555 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.583 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.676 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.679 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.679 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.681 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.681 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.682 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.696 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.697 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.697 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.697 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:46:24 compute-0 podman[238872]: 2025-12-09 10:46:24.958736006 +0000 UTC m=+0.115915071 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 09 10:46:27 compute-0 podman[238890]: 2025-12-09 10:46:27.962251777 +0000 UTC m=+0.114013828 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=)
Dec 09 10:46:29 compute-0 podman[203687]: time="2025-12-09T10:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:46:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:46:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4268 "" "Go-http-client/1.1"
Dec 09 10:46:29 compute-0 podman[238910]: 2025-12-09 10:46:29.948979051 +0000 UTC m=+0.097268476 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 09 10:46:29 compute-0 podman[238909]: 2025-12-09 10:46:29.952599084 +0000 UTC m=+0.103286976 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 09 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:46:34 compute-0 podman[238949]: 2025-12-09 10:46:34.956874192 +0000 UTC m=+0.104229312 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec 09 10:46:36 compute-0 podman[238969]: 2025-12-09 10:46:36.99132506 +0000 UTC m=+0.139218987 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:46:39 compute-0 podman[238994]: 2025-12-09 10:46:39.950612056 +0000 UTC m=+0.102768321 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:46:40 compute-0 sshd-session[239001]: Invalid user debian from 159.223.8.217 port 58038
Dec 09 10:46:40 compute-0 sshd-session[239001]: Connection closed by invalid user debian 159.223.8.217 port 58038 [preauth]
Dec 09 10:46:47 compute-0 podman[239020]: 2025-12-09 10:46:47.958209331 +0000 UTC m=+0.099307265 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 09 10:46:51 compute-0 podman[239041]: 2025-12-09 10:46:51.929323734 +0000 UTC m=+0.087590625 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:46:52 compute-0 sshd-session[239065]: Received disconnect from 193.46.255.20 port 12193:11:  [preauth]
Dec 09 10:46:52 compute-0 sshd-session[239065]: Disconnected from authenticating user root 193.46.255.20 port 12193 [preauth]
Dec 09 10:46:55 compute-0 podman[239067]: 2025-12-09 10:46:55.941941664 +0000 UTC m=+0.096495095 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 10:46:58 compute-0 podman[239087]: 2025-12-09 10:46:58.985176992 +0000 UTC m=+0.131771797 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, managed_by=edpm_ansible, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 10:46:59 compute-0 podman[203687]: time="2025-12-09T10:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:46:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:46:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4272 "" "Go-http-client/1.1"
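Editor's note: the two GET lines above are the libpod REST API being served over podman's Unix socket; the podman_exporter container in this capture mounts /run/podman/podman.sock and sets CONTAINER_HOST to that socket. A minimal sketch, assuming that socket path and the API version seen in the log, of issuing the same containers/json query from the Python standard library (the Names and State response fields are what recent libpod versions return and should be treated as an assumption here):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a Unix domain socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"], c["State"])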
Dec 09 10:47:00 compute-0 podman[239106]: 2025-12-09 10:47:00.952457739 +0000 UTC m=+0.087627825 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 09 10:47:00 compute-0 podman[239105]: 2025-12-09 10:47:00.971170476 +0000 UTC m=+0.105075366 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:47:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:47:01 compute-0 openstack_network_exporter[205823]: 
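Editor's note: the appctl.go errors above mean the exporter cannot find the control sockets it polls. ovn-northd normally runs with the OVN databases on the control plane rather than on a compute node, so its socket is expected to be absent here, and the ovsdb-server and dpif-netdev queries fail for the same "no control socket" reason. A small sketch, assuming the host-side directories that the openstack_network_exporter container maps to /run/openvswitch and /run/ovn (see the volume list in the health-check entry just below), to list which *.ctl control sockets actually exist:

    import glob

    # Host-side directories mapped into the exporter container as
    # /run/openvswitch and /run/ovn (paths taken from its config_data below).
    for directory in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        sockets = glob.glob(f"{directory}/*.ctl")
        print(directory, sockets or "no control sockets found")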
Dec 09 10:47:05 compute-0 podman[239143]: 2025-12-09 10:47:05.974344485 +0000 UTC m=+0.111226299 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Dec 09 10:47:08 compute-0 podman[239164]: 2025-12-09 10:47:08.034489594 +0000 UTC m=+0.174129208 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 09 10:47:09 compute-0 sshd-session[239190]: Invalid user debian from 159.223.8.217 port 51298
Dec 09 10:47:09 compute-0 sshd-session[239190]: Connection closed by invalid user debian 159.223.8.217 port 51298 [preauth]
Dec 09 10:47:10 compute-0 podman[239192]: 2025-12-09 10:47:10.953101497 +0000 UTC m=+0.110453997 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:47:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:16.975 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:47:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:16.976 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:47:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:16.976 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
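Editor's note: the three lockutils lines above are the standard oslo.concurrency acquire/release trace emitted around a synchronized callable. A minimal sketch of the pattern that produces them (the lock name _check_child_processes is taken from the log; the class and method body here are placeholders, not neutron's actual code):

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # Body elided; the decorator's wrapper logs "Acquiring lock",
            # "acquired" and "released" exactly as in the journal entries above.
            pass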
Dec 09 10:47:18 compute-0 podman[239216]: 2025-12-09 10:47:18.941085821 +0000 UTC m=+0.097924615 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 09 10:47:19 compute-0 nova_compute[189493]: 2025-12-09 10:47:19.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:21 compute-0 nova_compute[189493]: 2025-12-09 10:47:21.839 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:22 compute-0 nova_compute[189493]: 2025-12-09 10:47:22.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:22 compute-0 podman[239236]: 2025-12-09 10:47:22.926020462 +0000 UTC m=+0.076340358 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:47:22 compute-0 nova_compute[189493]: 2025-12-09 10:47:22.927 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.880 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.882 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.882 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.883 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.334 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.335 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.23674774169922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.336 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.336 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.425 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.426 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.450 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.471 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.472 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.472 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
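Editor's note: for the inventory reported a few lines above, placement capacity follows the usual (total - reserved) * allocation_ratio rule. A small worked sketch using the numbers from that report (the formula is the standard placement capacity calculation; the helper name is mine):

    def capacity(total, reserved, allocation_ratio):
        """Schedulable capacity as placement computes it."""
        return (total - reserved) * allocation_ratio

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, capacity(**inv))
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 71.1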
Dec 09 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.472 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.472 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.473 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.844 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.863 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.864 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:47:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:26.415 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:47:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:26.417 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 10:47:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:26.418 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
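Editor's note: the DbSetCommand above writes neutron:ovn-metadata-sb-cfg=2 into external_ids of the Chassis_Private record. A hedged way to read that value back, assuming ovn-sbctl is available and pointed at the same southbound database (for example from inside the ovn_controller container), is sketched below; the record UUID is copied from the transaction line above:

    import subprocess

    record = "9ec27861-bbe8-48fb-b30f-25b967e1609e"
    out = subprocess.run(
        ["ovn-sbctl", "get", "Chassis_Private", record, "external_ids"],
        capture_output=True, text=True, check=False)
    print(out.stdout.strip())   # e.g. {neutron:ovn-metadata-sb-cfg="2", ...}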
Dec 09 10:47:26 compute-0 podman[239259]: 2025-12-09 10:47:26.992397905 +0000 UTC m=+0.141363088 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 10:47:29 compute-0 podman[203687]: time="2025-12-09T10:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:47:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:47:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4264 "" "Go-http-client/1.1"
Dec 09 10:47:29 compute-0 podman[239278]: 2025-12-09 10:47:29.950664563 +0000 UTC m=+0.104371166 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9, config_id=edpm, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec 09 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:47:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:47:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:47:31 compute-0 podman[239297]: 2025-12-09 10:47:31.97985011 +0000 UTC m=+0.127733223 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 10:47:31 compute-0 podman[239298]: 2025-12-09 10:47:31.981381834 +0000 UTC m=+0.120924663 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 09 10:47:36 compute-0 podman[239333]: 2025-12-09 10:47:36.960220679 +0000 UTC m=+0.116744695 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal)
Dec 09 10:47:38 compute-0 sshd-session[239354]: Invalid user debian from 159.223.8.217 port 40282
Dec 09 10:47:38 compute-0 sshd-session[239354]: Connection closed by invalid user debian 159.223.8.217 port 40282 [preauth]
Dec 09 10:47:38 compute-0 podman[239356]: 2025-12-09 10:47:38.461278753 +0000 UTC m=+0.127406626 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 09 10:47:41 compute-0 podman[239380]: 2025-12-09 10:47:41.99189696 +0000 UTC m=+0.130901173 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 10:47:49 compute-0 podman[239404]: 2025-12-09 10:47:49.985187784 +0000 UTC m=+0.126042327 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 10:47:53 compute-0 podman[239424]: 2025-12-09 10:47:53.991738234 +0000 UTC m=+0.134340390 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:47:57 compute-0 podman[239447]: 2025-12-09 10:47:57.9665182 +0000 UTC m=+0.124227997 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 09 10:47:59 compute-0 podman[203687]: time="2025-12-09T10:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:47:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:47:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4275 "" "Go-http-client/1.1"
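These two GETs are the prometheus-podman-exporter scraping the podman system service over the libpod REST API on the unix socket named in CONTAINER_HOST above. A sketch of issuing the same containers/json query from Python (stdlib only; the socket path comes from the exporter config, and the shortened query string is an assumption):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a unix socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")  # host is unused but required
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(json.loads(resp.read())), "containers")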
Dec 09 10:48:00 compute-0 podman[239467]: 2025-12-09 10:48:00.910064353 +0000 UTC m=+0.068047875 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git, config_id=edpm, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, name=ubi9, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Dec 09 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
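The exporter errors repeat on every scrape because it probes for daemon control sockets that do not exist on this node: ovn-northd runs on the control plane, not on computes, and the dpif-netdev/pmd-* commands only apply to a userspace (DPDK) datapath. OVS-family daemons advertise themselves as <name>.<pid>.ctl files in their rundir, so the precondition being tested looks roughly like this (the rundir paths are the usual defaults, an assumption here):

    import glob
    import os

    def find_ctl(rundir, daemon):
        # OVS/OVN daemons create "<daemon>.<pid>.ctl" control sockets.
        matches = glob.glob(os.path.join(rundir, daemon + ".*.ctl"))
        return matches[0] if matches else None

    for rundir, daemon in [("/var/run/openvswitch", "ovs-vswitchd"),
                           ("/var/run/ovn", "ovn-northd")]:
        ctl = find_ctl(rundir, daemon)
        print(daemon, "->", ctl or "no control socket files found")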
Dec 09 10:48:02 compute-0 podman[239487]: 2025-12-09 10:48:02.974430012 +0000 UTC m=+0.114413679 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 10:48:02 compute-0 podman[239488]: 2025-12-09 10:48:02.993367205 +0000 UTC m=+0.125976875 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 10:48:07 compute-0 sshd-session[239525]: Invalid user debian from 159.223.8.217 port 56228
Dec 09 10:48:07 compute-0 sshd-session[239525]: Connection closed by invalid user debian 159.223.8.217 port 56228 [preauth]
Dec 09 10:48:07 compute-0 podman[239527]: 2025-12-09 10:48:07.409091665 +0000 UTC m=+0.098661327 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 10:48:09 compute-0 podman[239547]: 2025-12-09 10:48:09.016966653 +0000 UTC m=+0.165657981 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 09 10:48:12 compute-0 podman[239572]: 2025-12-09 10:48:12.996400125 +0000 UTC m=+0.139659992 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
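node_exporter above is started with --collector.systemd.unit-include, so only systemd units matching the anchored regex (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service are exported (the doubled backslash in config_data is just Python string escaping). A quick way to check which units the filter admits; re.fullmatch reproduces the anchoring:

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_nova_compute.service", "ovs-vswitchd.service",
                 "openvswitch.service", "virtqemud.service",
                 "rsyslog.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # sshd.service is the only one rejected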
Dec 09 10:48:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
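The three lockutils lines are oslo_concurrency's standard acquire/acquired/released pattern: the first duration is how long the caller waited for the lock, the second how long the critical section held it. A stdlib-only imitation of that instrumentation (this mirrors the log format; it is not the oslo_concurrency implementation):

    import threading
    import time

    _locks = {}

    class timed_lock:
        """Context manager that logs waited/held times like oslo lockutils."""
        def __init__(self, name):
            self.name = name
            self.lock = _locks.setdefault(name, threading.Lock())

        def __enter__(self):
            t0 = time.monotonic()
            self.lock.acquire()
            self.t_acquired = time.monotonic()
            print(f'Lock "{self.name}" acquired :: waited {self.t_acquired - t0:.3f}s')

        def __exit__(self, *exc):
            held = time.monotonic() - self.t_acquired
            self.lock.release()
            print(f'Lock "{self.name}" released :: held {held:.3f}s')

    with timed_lock("_check_child_processes"):
        pass  # critical section goes here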
Dec 09 10:48:20 compute-0 podman[239595]: 2025-12-09 10:48:20.902248844 +0000 UTC m=+0.060730809 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 10:48:21 compute-0 nova_compute[189493]: 2025-12-09 10:48:21.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:48:22 compute-0 nova_compute[189493]: 2025-12-09 10:48:22.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:48:22 compute-0 nova_compute[189493]: 2025-12-09 10:48:22.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
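These nova-compute lines are oslo_service periodic tasks firing from the compute manager's task loop; each named method runs when its own interval elapses. The mechanism, reduced to a stdlib sketch (the decorator and registry names are illustrative, not oslo's API):

    import time

    PERIODIC_TASKS = []

    def periodic_task(interval):
        def wrap(fn):
            PERIODIC_TASKS.append({"interval": interval, "fn": fn, "last": 0.0})
            return fn
        return wrap

    @periodic_task(interval=60.0)
    def _poll_volume_usage():
        print("Running periodic task _poll_volume_usage")

    def run_periodic_tasks():
        now = time.monotonic()
        for task in PERIODIC_TASKS:
            if now - task["last"] >= task["interval"]:
                task["last"] = now
                task["fn"]()

    for _ in range(3):       # the real loop runs for the life of the service
        run_periodic_tasks()
        time.sleep(1.0)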
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.287 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.288 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
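The two lines above state the constraint directly: this polling task has more pollsters than worker threads, so the single-worker ThreadPoolExecutor processes them one at a time and the whole cycle stretches accordingly. A compact illustration of that serialization:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)          # stand-in for one meter's poll
        return name

    names = ["meter-%d" % i for i in range(8)]
    t0 = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
        list(executor.map(pollster, names))
    print("8 pollsters, 1 worker: %.1fs (~8 x 0.1s, serialized)"
          % (time.monotonic() - t0))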
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
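Taken together, the cycle above shows the discovery cache doing its job: the first pollster triggers one local_instances discovery, libvirt reports no instances on this host, the empty result is stored in the discovery cache ({'local_instances': []}), and every later pollster is skipped against that cached value instead of re-running discovery. The shape of that logic, as a sketch with hypothetical stand-in names:

    from dataclasses import dataclass

    @dataclass
    class Pollster:              # hypothetical stand-in for a ceilometer pollster
        name: str
        discovery: str

    def run_cycle(pollsters, discover):
        discovery_cache = {}
        for p in pollsters:
            if p.discovery not in discovery_cache:   # discovery runs once per cycle
                discovery_cache[p.discovery] = discover(p.discovery)
            if not discovery_cache[p.discovery]:
                print("Skip pollster %s, no resources found this cycle" % p.name)

    run_cycle([Pollster("cpu", "local_instances"),
               Pollster("memory.usage", "local_instances")],
              discover=lambda method: [])   # no instances on this compute yet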
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.877 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.879 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.879 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.880 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:48:24 compute-0 podman[239616]: 2025-12-09 10:48:24.96483101 +0000 UTC m=+0.098448569 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.258 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.260 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5692MB free_disk=72.23674774169922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.260 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.261 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.342 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.343 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.384 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.396 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.398 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.398 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.398 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.398 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.398 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.419 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.421 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.421 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:48:27 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:27.496 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:48:27 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:27.498 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:48:28 compute-0 podman[239638]: 2025-12-09 10:48:28.959328291 +0000 UTC m=+0.096781993 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 10:48:29 compute-0 podman[203687]: time="2025-12-09T10:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:48:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 10:48:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4273 "" "Go-http-client/1.1"
Dec 09 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:48:31 compute-0 podman[239657]: 2025-12-09 10:48:31.95005747 +0000 UTC m=+0.102072773 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 10:48:33 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:33.500 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:48:33 compute-0 podman[239677]: 2025-12-09 10:48:33.964878751 +0000 UTC m=+0.102152646 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec 09 10:48:33 compute-0 podman[239676]: 2025-12-09 10:48:33.987405551 +0000 UTC m=+0.126076776 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:48:36 compute-0 sshd-session[239716]: Invalid user debian from 159.223.8.217 port 49216
Dec 09 10:48:36 compute-0 sshd-session[239716]: Connection closed by invalid user debian 159.223.8.217 port 49216 [preauth]
Dec 09 10:48:37 compute-0 podman[239718]: 2025-12-09 10:48:37.950388576 +0000 UTC m=+0.104177473 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible)
Dec 09 10:48:40 compute-0 podman[239740]: 2025-12-09 10:48:40.023204765 +0000 UTC m=+0.162813110 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.043 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.044 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.065 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.207 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.209 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.224 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.225 189497 INFO nova.compute.claims [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Claim successful on node compute-0.ctlplane.example.com
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.349 189497 DEBUG nova.compute.provider_tree [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.370 189497 DEBUG nova.scheduler.client.report [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.388 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.389 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.436 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.437 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.461 189497 INFO nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.501 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.583 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.586 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.587 189497 INFO nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Creating image(s)
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.589 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.590 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.592 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.593 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.594 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:42 compute-0 nova_compute[189493]: 2025-12-09 10:48:42.559 189497 WARNING oslo_policy.policy [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 09 10:48:42 compute-0 nova_compute[189493]: 2025-12-09 10:48:42.560 189497 WARNING oslo_policy.policy [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.035 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.138 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.part --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.140 189497 DEBUG nova.virt.images [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] 53d12211-5d5c-4333-b3ee-e3dcf1663767 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.143 189497 DEBUG nova.privsep.utils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.144 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.part /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.633 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.part /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.converted" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.642 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.731 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.converted --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.734 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.764 189497 INFO oslo.privsep.daemon [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp35j06s_p/privsep.sock']
Dec 09 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.966 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Successfully created port: 2c684388-b6d9-4de0-8691-29807fabed2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 09 10:48:43 compute-0 podman[239783]: 2025-12-09 10:48:43.972556383 +0000 UTC m=+0.117143572 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.610 189497 INFO oslo.privsep.daemon [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Spawned new privsep daemon via rootwrap
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.389 239806 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.393 239806 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.395 239806 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.395 239806 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239806
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.704 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.763 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.764 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.765 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.783 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.852 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.853 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.923 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.924 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.925 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.996 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.998 189497 DEBUG nova.virt.disk.api [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.999 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.057 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.060 189497 DEBUG nova.virt.disk.api [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
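[annotation] The "Cannot resize image ... to a smaller size" message is the result of comparing the requested size against the virtual size reported by qemu-img info. A simplified sketch of that check, assuming the JSON output format used by the commands above:

    import json
    import subprocess

    def can_resize_image(path, requested_bytes):
        """Return True only if the requested size is larger than the current
        virtual size; shrinking is refused, as seen in the log above."""
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return requested_bytes > json.loads(out)["virtual-size"]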
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.061 189497 DEBUG nova.objects.instance [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.497 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.498 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.500 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
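[annotation] The disk.info update is serialized with an oslo.concurrency lock, the same mechanism behind the Acquiring/acquired/released lines throughout this log; with an external (file-based) lock, separate processes on the host also serialize. A minimal sketch of the decorator-style usage; the lock name, path, and callable here are illustrative only:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("disk.info-example", external=True,
                            lock_path="/var/lib/nova/tmp")
    def write_disk_info():
        # update the disk.info file here; only one holder runs at a time
        pass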
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.501 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.502 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.503 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.552 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.553 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.611 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.613 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
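[annotation] The ephemeral backing image is built once in the image cache: a 1 GiB raw file formatted as VFAT with label ephemeral0, which is then reused as the backing file for each instance's disk.eph0. A sketch of those two commands, with the path as a placeholder:

    import subprocess

    def make_ephemeral_base(path, size="1G", label="ephemeral0"):
        """Create and format a raw ephemeral base image (the two commands
        logged above)."""
        subprocess.run(["qemu-img", "create", "-f", "raw", path, size],
                       check=True)
        subprocess.run(["mkfs", "-t", "vfat", "-n", label, path], check=True)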
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.638 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.735 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
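[annotation] Every qemu-img info call above is wrapped in oslo_concurrency.prlimit with --as=1073741824 --cpu=30, capping the child's address space at 1 GiB and its CPU time at 30 seconds so a malformed image cannot exhaust the compute host. A sketch of the same idea with the standard resource module (POSIX only; limits mirror the logged flags):

    import resource
    import subprocess

    def run_limited(cmd, as_bytes=1073741824, cpu_seconds=30):
        """Run cmd with RLIMIT_AS and RLIMIT_CPU applied in the child."""
        def set_limits():
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        return subprocess.run(cmd, preexec_fn=set_limits,
                              capture_output=True, text=True)

    # run_limited(["qemu-img", "info", "/path/to/image", "--output=json"])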
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.737 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.737 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.761 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.821 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Successfully updated port: 2c684388-b6d9-4de0-8691-29807fabed2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.827 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.828 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.842 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.843 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.843 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.867 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.869 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.870 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.958 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.959 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.959 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Ensure instance console log exists: /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.960 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.960 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.961 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:46 compute-0 nova_compute[189493]: 2025-12-09 10:48:46.053 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 09 10:48:46 compute-0 nova_compute[189493]: 2025-12-09 10:48:46.360 189497 DEBUG nova.compute.manager [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-changed-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:48:46 compute-0 nova_compute[189493]: 2025-12-09 10:48:46.361 189497 DEBUG nova.compute.manager [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Refreshing instance network info cache due to event network-changed-2c684388-b6d9-4de0-8691-29807fabed2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 09 10:48:46 compute-0 nova_compute[189493]: 2025-12-09 10:48:46.361 189497 DEBUG oslo_concurrency.lockutils [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.567 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.675 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.676 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance network_info: |[{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.677 189497 DEBUG oslo_concurrency.lockutils [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.678 189497 DEBUG nova.network.neutron [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Refreshing network info cache for port 2c684388-b6d9-4de0-8691-29807fabed2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
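[annotation] The network_info blob recorded a few lines up is a JSON list of VIFs, each carrying the port ID, MAC, MTU and fixed IPs that later appear in the guest XML and the tap device name. A sketch of pulling those fields out of a structure shaped like the logged one (names are illustrative):

    import json

    def summarize_vifs(network_info_json):
        """Yield (port_id, mac, mtu, fixed_ips) from a network_info list."""
        for vif in json.loads(network_info_json):
            ips = [ip["address"]
                   for subnet in vif["network"]["subnets"]
                   for ip in subnet["ips"]]
            yield (vif["id"], vif["address"],
                   vif["network"]["meta"]["mtu"], ips)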
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.682 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start _get_guest_xml network_info=[{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.704 189497 WARNING nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.714 189497 DEBUG nova.virt.libvirt.host [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.716 189497 DEBUG nova.virt.libvirt.host [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.722 189497 DEBUG nova.virt.libvirt.host [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.723 189497 DEBUG nova.virt.libvirt.host [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
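[annotation] The two host checks above first look for a cgroups v1 cpu controller (missing here) and then find one under cgroups v2. On a unified-hierarchy host the v2 check roughly amounts to reading the root cgroup.controllers file; a sketch, assuming the standard mount point:

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        """Return True if the unified cgroup hierarchy exposes the 'cpu'
        controller, approximately what the host probe above establishes."""
        try:
            with open(f"{root}/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False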
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.724 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.724 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T10:47:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cf91b364-8467-4d1e-8c92-f7d1fab99905',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.725 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.726 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.726 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.727 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.727 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.727 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.728 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.728 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.729 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.729 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
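[annotation] The topology lines show the selection logic: with no flavor or image constraints (limits 0:0:0) the defaults of 65536 sockets/cores/threads apply, and a 1-vCPU guest collapses to the single possible topology 1:1:1, which is what lands in the <topology> element of the XML below. A much-simplified sketch of that enumeration for the unconstrained case:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """List (sockets, cores, threads) triples whose product equals vcpus,
        within the limits; for vcpus=1 this is just [(1, 1, 1)]."""
        return [(s, c, t)
                for s, c, t in itertools.product(
                    range(1, min(vcpus, max_sockets) + 1),
                    range(1, min(vcpus, max_cores) + 1),
                    range(1, min(vcpus, max_threads) + 1))
                if s * c * t == vcpus]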
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.737 189497 DEBUG nova.privsep.utils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
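[annotation] The direct I/O probe decides whether the qcow2 disks can be opened with cache="none", as they are in the XML below; in essence it tries an O_DIRECT open against a scratch file in the instances directory. A simplified sketch of such a probe (the test file name is illustrative; the real check also performs an aligned write):

    import os

    def supports_direct_io(dirpath):
        """Attempt an O_DIRECT open of a scratch file under dirpath."""
        testfile = os.path.join(dirpath, ".directio.test")
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            os.close(fd)
            return True
        except OSError:
            return False
        finally:
            try:
                os.unlink(testfile)
            except OSError:
                pass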
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.740 189497 DEBUG nova.virt.libvirt.vif [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:48:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-o83aar8e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:48:41Z,user_data=None,user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.741 189497 DEBUG nova.network.os_vif_util [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.743 189497 DEBUG nova.network.os_vif_util [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
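[annotation] The os_vif_util step above converts Nova's VIF dict into a typed VIFOpenVSwitch object for the ovs plugin. A plain-dict stand-in for that field mapping, based only on the converted object printed in the log and deliberately not using the real os-vif object model:

    def nova_vif_to_osvif_fields(vif):
        """Map the Nova VIF dict (as logged above) onto the fields shown in
        the converted VIFOpenVSwitch object."""
        return {
            "id": vif["id"],
            "address": vif["address"],
            "bridge_name": vif["details"]["bridge_name"],
            "has_traffic_filtering": vif["details"]["port_filter"],
            "vif_name": vif["devname"],
            "network_id": vif["network"]["id"],
            "preserve_on_delete": vif["preserve_on_delete"],
            "active": vif["active"],
            "plugin": vif["type"],
        }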
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.747 189497 DEBUG nova.objects.instance [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.918 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] End _get_guest_xml xml=<domain type="kvm">
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <uuid>41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f</uuid>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <name>instance-00000001</name>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <memory>524288</memory>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <vcpu>1</vcpu>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <metadata>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <nova:name>test_0</nova:name>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <nova:creationTime>2025-12-09 10:48:49</nova:creationTime>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <nova:flavor name="m1.small">
Dec 09 10:48:49 compute-0 nova_compute[189493]:         <nova:memory>512</nova:memory>
Dec 09 10:48:49 compute-0 nova_compute[189493]:         <nova:disk>1</nova:disk>
Dec 09 10:48:49 compute-0 nova_compute[189493]:         <nova:swap>0</nova:swap>
Dec 09 10:48:49 compute-0 nova_compute[189493]:         <nova:ephemeral>1</nova:ephemeral>
Dec 09 10:48:49 compute-0 nova_compute[189493]:         <nova:vcpus>1</nova:vcpus>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       </nova:flavor>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <nova:owner>
Dec 09 10:48:49 compute-0 nova_compute[189493]:         <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec 09 10:48:49 compute-0 nova_compute[189493]:         <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       </nova:owner>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <nova:root type="image" uuid="53d12211-5d5c-4333-b3ee-e3dcf1663767"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <nova:ports>
Dec 09 10:48:49 compute-0 nova_compute[189493]:         <nova:port uuid="2c684388-b6d9-4de0-8691-29807fabed2c">
Dec 09 10:48:49 compute-0 nova_compute[189493]:           <nova:ip type="fixed" address="192.168.0.250" ipVersion="4"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:         </nova:port>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       </nova:ports>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </nova:instance>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   </metadata>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <sysinfo type="smbios">
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <system>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <entry name="manufacturer">RDO</entry>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <entry name="product">OpenStack Compute</entry>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <entry name="serial">41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f</entry>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <entry name="uuid">41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f</entry>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <entry name="family">Virtual Machine</entry>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </system>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   </sysinfo>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <os>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <boot dev="hd"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <smbios mode="sysinfo"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   </os>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <features>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <acpi/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <apic/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <vmcoreinfo/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   </features>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <clock offset="utc">
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <timer name="pit" tickpolicy="delay"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <timer name="hpet" present="no"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   </clock>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <cpu mode="host-model" match="exact">
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <topology sockets="1" cores="1" threads="1"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   </cpu>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   <devices>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <target dev="vda" bus="virtio"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <target dev="vdb" bus="virtio"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <disk type="file" device="cdrom">
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <driver name="qemu" type="raw" cache="none"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.config"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <target dev="sda" bus="sata"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <interface type="ethernet">
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <mac address="fa:16:3e:c7:65:39"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <model type="virtio"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <driver name="vhost" rx_queue_size="512"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <mtu size="1442"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <target dev="tap2c684388-b6"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </interface>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <serial type="pty">
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <log file="/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/console.log" append="off"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </serial>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <video>
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <model type="virtio"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </video>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <input type="tablet" bus="usb"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <rng model="virtio">
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <backend model="random">/dev/urandom</backend>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </rng>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <controller type="usb" index="0"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     <memballoon model="virtio">
Dec 09 10:48:49 compute-0 nova_compute[189493]:       <stats period="10"/>
Dec 09 10:48:49 compute-0 nova_compute[189493]:     </memballoon>
Dec 09 10:48:49 compute-0 nova_compute[189493]:   </devices>
Dec 09 10:48:49 compute-0 nova_compute[189493]: </domain>
Dec 09 10:48:49 compute-0 nova_compute[189493]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
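[annotation] The domain XML above is what gets handed to libvirt to define the guest; its <disk> sources and <interface> target are exactly the files and tap device created in the earlier steps. A small standard-library sketch that parses such an XML string and lists those devices:

    import xml.etree.ElementTree as ET

    def describe_domain(xml_text):
        """Print the disks and interfaces of a libvirt domain XML like the
        one logged above."""
        root = ET.fromstring(xml_text)
        for disk in root.findall("./devices/disk"):
            src = disk.find("source")
            tgt = disk.find("target")
            print(disk.get("device"), tgt.get("dev"),
                  src.get("file") if src is not None else "-")
        for iface in root.findall("./devices/interface"):
            print("interface", iface.find("mac").get("address"),
                  iface.find("target").get("dev"))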
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.921 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Preparing to wait for external event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.922 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.922 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.923 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
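[annotation] Before plugging the VIF, the compute manager registers that it will wait for the external network-vif-plugged event that Neutron sends back once the port is wired up; spawning later blocks on that event with a timeout. A generic sketch of the register-then-wait pattern using the standard library, not Nova's actual eventlet-based implementation:

    import threading

    class InstanceEvents:
        """Register an expected external event, then wait for delivery."""
        def __init__(self):
            self._events = {}

        def prepare(self, name):
            self._events[name] = threading.Event()

        def deliver(self, name):
            self._events[name].set()

        def wait(self, name, timeout=300):
            return self._events[name].wait(timeout)

    # prepare("network-vif-plugged-<port-id>") before plugging,
    # deliver(...) when the Neutron notification arrives,
    # wait(...) before starting the guest.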
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.924 189497 DEBUG nova.virt.libvirt.vif [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:48:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-o83aar8e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:48:41Z,user_data=None,user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.925 189497 DEBUG nova.network.os_vif_util [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.926 189497 DEBUG nova.network.os_vif_util [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.927 189497 DEBUG os_vif [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.996 189497 DEBUG ovsdbapp.backend.ovs_idl [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.997 189497 DEBUG ovsdbapp.backend.ovs_idl [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.997 189497 DEBUG ovsdbapp.backend.ovs_idl [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.997 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.998 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.999 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.000 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.002 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.005 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.018 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.018 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.018 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.019 189497 INFO oslo.privsep.daemon [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpq6wvstdd/privsep.sock']
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.828 189497 INFO oslo.privsep.daemon [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Spawned new privsep daemon via rootwrap
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.664 239844 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.675 239844 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.679 239844 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 09 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.680 239844 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239844
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.173 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.175 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c684388-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.177 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c684388-b6, col_values=(('external_ids', {'iface-id': '2c684388-b6d9-4de0-8691-29807fabed2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:65:39', 'vm-uuid': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.181 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:51 compute-0 NetworkManager[56302]: <info>  [1765277331.1830] manager: (tap2c684388-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.185 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.194 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.196 189497 INFO os_vif [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6')
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.289 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.290 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.290 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.291 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No VIF found with MAC fa:16:3e:c7:65:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.291 189497 INFO nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Using config drive
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.435 189497 DEBUG nova.network.neutron [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated VIF entry in instance network info cache for port 2c684388-b6d9-4de0-8691-29807fabed2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.436 189497 DEBUG nova.network.neutron [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.456 189497 DEBUG oslo_concurrency.lockutils [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.896 189497 INFO nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Creating config drive at /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.config
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.900 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2sdlf6h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:48:51 compute-0 podman[239850]: 2025-12-09 10:48:51.972139236 +0000 UTC m=+0.123605205 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.990 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.041 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2sdlf6h" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:48:52 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 09 10:48:52 compute-0 kernel: tap2c684388-b6: entered promiscuous mode
Dec 09 10:48:52 compute-0 NetworkManager[56302]: <info>  [1765277332.1876] manager: (tap2c684388-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.185 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:52 compute-0 ovn_controller[97780]: 2025-12-09T10:48:52Z|00027|binding|INFO|Claiming lport 2c684388-b6d9-4de0-8691-29807fabed2c for this chassis.
Dec 09 10:48:52 compute-0 ovn_controller[97780]: 2025-12-09T10:48:52Z|00028|binding|INFO|2c684388-b6d9-4de0-8691-29807fabed2c: Claiming fa:16:3e:c7:65:39 192.168.0.250
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.200 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.209 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:65:39 192.168.0.250'], port_security=['fa:16:3e:c7:65:39 192.168.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.250/24', 'neutron:device_id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=2c684388-b6d9-4de0-8691-29807fabed2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.212 106644 INFO neutron.agent.ovn.metadata.agent [-] Port 2c684388-b6d9-4de0-8691-29807fabed2c in datapath c5af7354-5afe-400a-9e13-5500648117d8 bound to our chassis
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.215 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.217 106644 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp704nukzt/privsep.sock']
Dec 09 10:48:52 compute-0 systemd-udevd[239891]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:48:52 compute-0 NetworkManager[56302]: <info>  [1765277332.2628] device (tap2c684388-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 10:48:52 compute-0 NetworkManager[56302]: <info>  [1765277332.2636] device (tap2c684388-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.293 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:52 compute-0 systemd-machined[155790]: New machine qemu-1-instance-00000001.
Dec 09 10:48:52 compute-0 ovn_controller[97780]: 2025-12-09T10:48:52Z|00029|binding|INFO|Setting lport 2c684388-b6d9-4de0-8691-29807fabed2c ovn-installed in OVS
Dec 09 10:48:52 compute-0 ovn_controller[97780]: 2025-12-09T10:48:52Z|00030|binding|INFO|Setting lport 2c684388-b6d9-4de0-8691-29807fabed2c up in Southbound
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.303 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:52 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.565 189497 DEBUG nova.compute.manager [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.566 189497 DEBUG oslo_concurrency.lockutils [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.569 189497 DEBUG oslo_concurrency.lockutils [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.572 189497 DEBUG oslo_concurrency.lockutils [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.572 189497 DEBUG nova.compute.manager [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Processing event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.669 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277332.6679087, 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.670 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] VM Started (Lifecycle Event)
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.674 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.682 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.695 189497 INFO nova.virt.libvirt.driver [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance spawned successfully.
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.695 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 09 10:48:52 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 09 10:48:52 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.828 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.838 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.878 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.879 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277332.668131, 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.879 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] VM Paused (Lifecycle Event)
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.903 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.910 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277332.6776702, 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.910 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] VM Resumed (Lifecycle Event)
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.946 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.952 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.953 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.953 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.954 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.954 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.955 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.960 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.981 106644 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.982 106644 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp704nukzt/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.820 239934 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.827 239934 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.832 239934 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.832 239934 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239934
Dec 09 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.986 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0ba057-9def-41c7-ad8b-ed5b33718807]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.993 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 10:48:53 compute-0 nova_compute[189493]: 2025-12-09 10:48:53.023 189497 INFO nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Took 11.44 seconds to spawn the instance on the hypervisor.
Dec 09 10:48:53 compute-0 nova_compute[189493]: 2025-12-09 10:48:53.026 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:48:53 compute-0 nova_compute[189493]: 2025-12-09 10:48:53.148 189497 INFO nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Took 11.99 seconds to build instance.
Dec 09 10:48:53 compute-0 nova_compute[189493]: 2025-12-09 10:48:53.197 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:53.491 239934 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:53.491 239934 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:53.491 239934 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.046 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa791f0-3fa1-447b-8c19-64a5934562a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.048 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc5af7354-51 in ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.050 239934 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc5af7354-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.050 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0127dd-bc74-4e0d-9be7-14f3b039a7ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.053 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[44b95edd-e192-407b-b4f2-eff4c8961521]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.089 106757 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f681de-83c4-4ce4-b739-26b63f364715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.115 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[a73158ad-fe9b-44c0-96d1-8c957f8b4a79]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.119 106644 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp89wfnesc/privsep.sock']
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.813 106644 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.814 106644 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp89wfnesc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.671 239949 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.679 239949 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.684 239949 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.684 239949 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239949
Dec 09 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.817 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[ec24217d-6a5d-404e-a8f2-1ee21776bd5a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.825 189497 DEBUG nova.compute.manager [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.825 189497 DEBUG oslo_concurrency.lockutils [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.826 189497 DEBUG oslo_concurrency.lockutils [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.826 189497 DEBUG oslo_concurrency.lockutils [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.826 189497 DEBUG nova.compute.manager [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] No waiting events found dispatching network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.827 189497 WARNING nova.compute.manager [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received unexpected event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c for instance with vm_state active and task_state None.
Dec 09 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.317 239949 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.317 239949 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.317 239949 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:48:55 compute-0 podman[239954]: 2025-12-09 10:48:55.952745743 +0000 UTC m=+0.104571755 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.954 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[17701ff2-f5e5-4622-a71b-abcd6ef8131a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:55 compute-0 NetworkManager[56302]: <info>  [1765277335.9931] manager: (tapc5af7354-50): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Dec 09 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.991 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[432ae212-fe6d-4065-b8d8-80ac9f4f4fed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 systemd-udevd[239983]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.029 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[8feb6f7e-b999-42fc-b6b9-df6572006093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.036 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[f8731ff5-1e4b-4f58-bfc8-fed1db4a0f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 NetworkManager[56302]: <info>  [1765277336.0722] device (tapc5af7354-50): carrier: link connected
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.079 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[99314e4f-7bca-4745-8ed7-37e2ca1d84b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.106 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6c3bb9-1b1c-4507-9967-f63d7fa797be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 28193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240001, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.130 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9372a0-fd19-4eda-815f-9684ee6826d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:da0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396027, 'tstamp': 396027}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240002, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.151 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[24b72489-153a-4665-8930-8707d83d15e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 28193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240003, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.184 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.192 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[401bfbeb-9d31-45f2-9d67-93de0af399bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.267 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[cc69b384-89cb-4846-b519-92b31c8fac7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.270 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.272 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.273 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.276 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:56 compute-0 NetworkManager[56302]: <info>  [1765277336.2773] manager: (tapc5af7354-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec 09 10:48:56 compute-0 kernel: tapc5af7354-50: entered promiscuous mode
Dec 09 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.283 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.286 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
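The three ovsdbapp transactions above detach tapc5af7354-50 from br-ex (a no-op here, hence "Transaction caused no change"), attach it to br-int, and set external_ids:iface-id so ovn-controller can bind the metadata port. A rough equivalent using ovsdbapp's Open_vSwitch schema API is sketched below; the socket path and timeout are assumptions, and the agent issues these commands over its own long-lived IDL connection rather than a throwaway one like this.

    # Sketch: replay the DelPortCommand / AddPortCommand / DbSetCommand sequence
    # from the log with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapc5af7354-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapc5af7354-50', may_exist=True))
        txn.add(api.db_set('Interface', 'tapc5af7354-50',
                           ('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'})))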
Dec 09 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.289 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:56 compute-0 ovn_controller[97780]: 2025-12-09T10:48:56Z|00031|binding|INFO|Releasing lport 3eb47070-bc26-4827-a5a8-68152f05129c from this chassis (sb_readonly=0)
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.294 106644 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c5af7354-5afe-400a-9e13-5500648117d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c5af7354-5afe-400a-9e13-5500648117d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.296 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[6650579f-f3fe-4a26-bfc0-6ecd4a591320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.297 106644 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: global
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     log         /dev/log local0 debug
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     log-tag     haproxy-metadata-proxy-c5af7354-5afe-400a-9e13-5500648117d8
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     user        root
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     group       root
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     maxconn     1024
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     pidfile     /var/lib/neutron/external/pids/c5af7354-5afe-400a-9e13-5500648117d8.pid.haproxy
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     daemon
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: defaults
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     log global
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     mode http
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     option httplog
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     option dontlognull
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     option http-server-close
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     option forwardfor
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     retries                 3
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     timeout http-request    30s
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     timeout connect         30s
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     timeout client          32s
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     timeout server          32s
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     timeout http-keep-alive 30s
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: listen listener
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     bind 169.254.169.254:80
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     server metadata /var/lib/neutron/metadata_proxy
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:     http-request add-header X-OVN-Network-ID c5af7354-5afe-400a-9e13-5500648117d8
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
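The generated haproxy config binds 169.254.169.254:80 inside the namespace and forwards every request to the unix socket /var/lib/neutron/metadata_proxy, adding an X-OVN-Network-ID header so the metadata service can map the request back to network c5af7354-5afe-400a-9e13-5500648117d8. A standard-library sketch of poking that backend socket directly follows; the request path is only an example, and haproxy normally adds further headers (such as X-Forwarded-For) on top of the one shown.

    # Sketch: send one metadata-style request straight to the unix socket named
    # by the 'server metadata /var/lib/neutron/metadata_proxy' line above.
    import socket

    req = (b"GET /openstack/latest/meta_data.json HTTP/1.1\r\n"
           b"Host: 169.254.169.254\r\n"
           b"X-OVN-Network-ID: c5af7354-5afe-400a-9e13-5500648117d8\r\n"
           b"Connection: close\r\n\r\n")

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect('/var/lib/neutron/metadata_proxy')
        s.sendall(req)
        print(s.recv(65536).decode(errors='replace'))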
Dec 09 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.301 106644 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'env', 'PROCESS_TAG=haproxy-c5af7354-5afe-400a-9e13-5500648117d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c5af7354-5afe-400a-9e13-5500648117d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 09 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.321 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:56 compute-0 podman[240035]: 2025-12-09 10:48:56.855150642 +0000 UTC m=+0.095904598 container create c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 09 10:48:56 compute-0 podman[240035]: 2025-12-09 10:48:56.805335376 +0000 UTC m=+0.046089382 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 09 10:48:56 compute-0 systemd[1]: Started libpod-conmon-c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157.scope.
Dec 09 10:48:56 compute-0 systemd[1]: Started libcrun container.
Dec 09 10:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b24b7ae1cc8b90219deedb86d3b48361a8607a5826e7fa3b48e4b1d97a56504/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 09 10:48:56 compute-0 podman[240035]: 2025-12-09 10:48:56.983944404 +0000 UTC m=+0.224698350 container init c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 09 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.992 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:48:56 compute-0 podman[240035]: 2025-12-09 10:48:56.998319543 +0000 UTC m=+0.239073469 container start c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 09 10:48:57 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [NOTICE]   (240052) : New worker (240054) forked
Dec 09 10:48:57 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [NOTICE]   (240052) : Loading success.
Dec 09 10:48:59 compute-0 podman[203687]: time="2025-12-09T10:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:48:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:48:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4752 "" "Go-http-client/1.1"
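The two libpod requests above are the health-check poller listing all containers and then pulling their stats over the podman REST socket. A small sketch of the same listing via the podman Python bindings is below; the package (podman-py) and the root socket path are assumptions, since the log only shows the raw HTTP endpoints.

    # Sketch: mirror GET /libpod/containers/json?all=true from the log.
    from podman import PodmanClient

    with PodmanClient(base_url='unix:///run/podman/podman.sock') as client:
        for c in client.containers.list(all=True):
            print(c.name, c.status)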
Dec 09 10:48:59 compute-0 podman[240063]: 2025-12-09 10:48:59.958743781 +0000 UTC m=+0.103384450 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 09 10:49:01 compute-0 nova_compute[189493]: 2025-12-09 10:49:01.190 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:49:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:49:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:49:01 compute-0 nova_compute[189493]: 2025-12-09 10:49:01.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:02 compute-0 podman[240083]: 2025-12-09 10:49:02.184561071 +0000 UTC m=+0.151970172 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, container_name=kepler, vendor=Red Hat, Inc., release=1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, vcs-type=git, io.openshift.tags=base rhel9, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 09 10:49:04 compute-0 podman[240105]: 2025-12-09 10:49:04.942989776 +0000 UTC m=+0.087387516 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec 09 10:49:04 compute-0 podman[240104]: 2025-12-09 10:49:04.959746862 +0000 UTC m=+0.110562805 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:49:05 compute-0 sshd-session[240141]: Invalid user debian from 159.223.8.217 port 34134
Dec 09 10:49:05 compute-0 sshd-session[240141]: Connection closed by invalid user debian 159.223.8.217 port 34134 [preauth]
Dec 09 10:49:06 compute-0 nova_compute[189493]: 2025-12-09 10:49:06.193 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:06 compute-0 nova_compute[189493]: 2025-12-09 10:49:06.996 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7265] manager: (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Dec 09 10:49:07 compute-0 nova_compute[189493]: 2025-12-09 10:49:07.726 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7273] device (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <warn>  [1765277347.7275] device (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7282] manager: (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7285] device (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <warn>  [1765277347.7286] device (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7291] manager: (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7295] manager: (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7340] device (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 09 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7343] device (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 09 10:49:07 compute-0 ovn_controller[97780]: 2025-12-09T10:49:07Z|00032|binding|INFO|Releasing lport 3eb47070-bc26-4827-a5a8-68152f05129c from this chassis (sb_readonly=0)
Dec 09 10:49:07 compute-0 ovn_controller[97780]: 2025-12-09T10:49:07Z|00033|binding|INFO|Releasing lport 3eb47070-bc26-4827-a5a8-68152f05129c from this chassis (sb_readonly=0)
Dec 09 10:49:07 compute-0 nova_compute[189493]: 2025-12-09 10:49:07.776 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:07 compute-0 nova_compute[189493]: 2025-12-09 10:49:07.786 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.576 189497 DEBUG nova.compute.manager [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-changed-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.577 189497 DEBUG nova.compute.manager [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Refreshing instance network info cache due to event network-changed-2c684388-b6d9-4de0-8691-29807fabed2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 09 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.578 189497 DEBUG oslo_concurrency.lockutils [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.579 189497 DEBUG oslo_concurrency.lockutils [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.580 189497 DEBUG nova.network.neutron [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Refreshing network info cache for port 2c684388-b6d9-4de0-8691-29807fabed2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 09 10:49:08 compute-0 podman[240144]: 2025-12-09 10:49:08.977627448 +0000 UTC m=+0.138607612 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 10:49:10 compute-0 nova_compute[189493]: 2025-12-09 10:49:10.850 189497 DEBUG nova.network.neutron [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated VIF entry in instance network info cache for port 2c684388-b6d9-4de0-8691-29807fabed2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 09 10:49:10 compute-0 nova_compute[189493]: 2025-12-09 10:49:10.850 189497 DEBUG nova.network.neutron [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:49:10 compute-0 nova_compute[189493]: 2025-12-09 10:49:10.873 189497 DEBUG oslo_concurrency.lockutils [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
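The network-changed event handler serializes its work behind the per-instance lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f": acquired at 10:49:08.579, released at 10:49:10.873 once the instance info cache was rewritten. A minimal sketch of the oslo.concurrency primitive itself, with the lock name copied from the log and a placeholder body:

    # Sketch of the lock acquire/release pattern seen in the entries above.
    from oslo_concurrency import lockutils

    with lockutils.lock('refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f'):
        # placeholder for the neutron port lookup and instance cache update
        pass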
Dec 09 10:49:10 compute-0 podman[240163]: 2025-12-09 10:49:10.990844133 +0000 UTC m=+0.134485915 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 10:49:11 compute-0 nova_compute[189493]: 2025-12-09 10:49:11.196 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:12 compute-0 nova_compute[189493]: 2025-12-09 10:49:11.999 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:14 compute-0 podman[240187]: 2025-12-09 10:49:14.795402224 +0000 UTC m=+0.114545159 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:49:16 compute-0 nova_compute[189493]: 2025-12-09 10:49:16.201 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:49:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:16.978 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:49:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:16.979 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
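This ProcessMonitor pass verifies that the haproxy spawned for the ovnmeta namespace is still alive; the earlier 10:48:56.294 entry shows the same pid file being probed before the proxy existed. A rough by-hand version of that liveness check follows, using the pid file path from the log; the /proc probe is an illustration, not neutron's exact code.

    # Sketch: read the haproxy pid file and check whether the process is running.
    import os

    PID_FILE = ('/var/lib/neutron/external/pids/'
                'c5af7354-5afe-400a-9e13-5500648117d8.pid.haproxy')

    try:
        with open(PID_FILE) as f:
            pid = int(f.read().strip())
        alive = os.path.exists(f'/proc/{pid}')
    except (FileNotFoundError, ValueError):
        alive = False
    print('haproxy alive:', alive)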
Dec 09 10:49:17 compute-0 nova_compute[189493]: 2025-12-09 10:49:17.002 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:18 compute-0 nova_compute[189493]: 2025-12-09 10:49:18.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:18 compute-0 nova_compute[189493]: 2025-12-09 10:49:18.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 10:49:21 compute-0 nova_compute[189493]: 2025-12-09 10:49:21.208 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:22 compute-0 nova_compute[189493]: 2025-12-09 10:49:22.005 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:23 compute-0 podman[240211]: 2025-12-09 10:49:23.001047877 +0000 UTC m=+0.146480446 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:49:23 compute-0 nova_compute[189493]: 2025-12-09 10:49:23.889 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:23 compute-0 nova_compute[189493]: 2025-12-09 10:49:23.890 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:23 compute-0 nova_compute[189493]: 2025-12-09 10:49:23.890 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:24 compute-0 nova_compute[189493]: 2025-12-09 10:49:24.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:24 compute-0 nova_compute[189493]: 2025-12-09 10:49:24.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.858 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.859 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.891 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.891 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.891 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.891 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.015 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.089 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.090 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.147 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.149 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.215 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.218 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.219 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.299 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
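The resource audit shells out to qemu-img info under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s, and asks for JSON with --force-share so the running guest's disk can be inspected without locking it. A sketch of the same invocation through processutils, with the disk path copied from the log; running it outside nova is purely illustrative.

    # Sketch: run qemu-img info the way the resource tracker does.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)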
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.686 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.688 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5253MB free_disk=72.20544052124023GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.688 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.688 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:49:26 compute-0 podman[240247]: 2025-12-09 10:49:26.951953468 +0000 UTC m=+0.085875912 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.978 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.979 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.979 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.009 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.079 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.159 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.159 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.176 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.199 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.240 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 10:49:27 compute-0 ovn_controller[97780]: 2025-12-09T10:49:27Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:65:39 192.168.0.250
Dec 09 10:49:27 compute-0 ovn_controller[97780]: 2025-12-09T10:49:27Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:65:39 192.168.0.250
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.725 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updated inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.726 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.727 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.992 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.993 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
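Note: the periodic _update_available_resource run above ends with the resource tracker pushing VCPU/MEMORY_MB/DISK_GB inventory (allocation ratios 4.0/1.0/0.9, 512 MB reserved memory, 1 GB reserved disk) for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 to Placement and bumping the provider generation from 3 to 4. Assuming admin credentials and the python3-osc-placement CLI plugin are available, the same figures can be cross-checked out of band with "openstack resource provider inventory list cdc1168d-33c9-4d2c-8f23-1b695a68afd0", and current consumption with "openstack resource provider usage show cdc1168d-33c9-4d2c-8f23-1b695a68afd0".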
Dec 09 10:49:28 compute-0 nova_compute[189493]: 2025-12-09 10:49:28.972 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.132 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.133 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.134 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.612 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.613 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.613 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.614 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:49:29 compute-0 podman[203687]: time="2025-12-09T10:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:49:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:49:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4763 "" "Go-http-client/1.1"
Dec 09 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.881 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.905 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.906 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.907 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.907 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.908 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.908 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.909 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.924 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 10:49:30 compute-0 podman[240284]: 2025-12-09 10:49:30.931068983 +0000 UTC m=+0.088243880 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 09 10:49:31 compute-0 nova_compute[189493]: 2025-12-09 10:49:31.221 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:49:32 compute-0 nova_compute[189493]: 2025-12-09 10:49:32.013 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:32 compute-0 sshd-session[240282]: Invalid user ubuntu from 58.82.169.249 port 37216
Dec 09 10:49:32 compute-0 sshd-session[240282]: Received disconnect from 58.82.169.249 port 37216:11:  [preauth]
Dec 09 10:49:32 compute-0 sshd-session[240282]: Disconnected from invalid user ubuntu 58.82.169.249 port 37216 [preauth]
Dec 09 10:49:32 compute-0 podman[240302]: 2025-12-09 10:49:32.942040414 +0000 UTC m=+0.097126203 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:49:35 compute-0 sshd-session[240321]: Invalid user debian from 159.223.8.217 port 36736
Dec 09 10:49:35 compute-0 sshd-session[240321]: Connection closed by invalid user debian 159.223.8.217 port 36736 [preauth]
Dec 09 10:49:35 compute-0 podman[240324]: 2025-12-09 10:49:35.8450649 +0000 UTC m=+0.087529310 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 09 10:49:35 compute-0 podman[240323]: 2025-12-09 10:49:35.857640637 +0000 UTC m=+0.112025097 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 09 10:49:36 compute-0 nova_compute[189493]: 2025-12-09 10:49:36.226 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:37 compute-0 nova_compute[189493]: 2025-12-09 10:49:37.018 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:37 compute-0 ovn_controller[97780]: 2025-12-09T10:49:37Z|00034|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec 09 10:49:39 compute-0 podman[240361]: 2025-12-09 10:49:39.945987766 +0000 UTC m=+0.098417509 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec 09 10:49:41 compute-0 nova_compute[189493]: 2025-12-09 10:49:41.230 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:42 compute-0 nova_compute[189493]: 2025-12-09 10:49:42.022 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:42 compute-0 podman[240382]: 2025-12-09 10:49:42.067016246 +0000 UTC m=+0.214063228 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:49:44 compute-0 podman[240406]: 2025-12-09 10:49:44.947722857 +0000 UTC m=+0.092236854 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:49:46 compute-0 nova_compute[189493]: 2025-12-09 10:49:46.236 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:47 compute-0 nova_compute[189493]: 2025-12-09 10:49:47.023 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.512 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.545 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 09 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.546 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.547 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.588 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
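Note: the per-instance "Acquiring/acquired/released" lock lines above come from oslo.concurrency locks keyed on the instance UUID, which serialize the power-state sync for one instance against other operations on the same instance. A minimal sketch of that pattern (not Nova's actual code; the function body is illustrative only):

    # Sketch: serialize work per instance UUID with oslo.concurrency,
    # mirroring the lock names seen in the log lines above.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f"

    @lockutils.synchronized(INSTANCE_UUID)
    def query_driver_power_state_and_sync():
        # Placeholder: Nova compares the hypervisor's reported power state
        # with the instance record here and corrects any drift.
        pass

    query_driver_power_state_and_sync()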
Dec 09 10:49:51 compute-0 nova_compute[189493]: 2025-12-09 10:49:51.241 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:52 compute-0 nova_compute[189493]: 2025-12-09 10:49:52.027 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:53.942 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:49:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:53.943 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 10:49:53 compute-0 nova_compute[189493]: 2025-12-09 10:49:53.944 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:53 compute-0 podman[240431]: 2025-12-09 10:49:53.956507074 +0000 UTC m=+0.098564493 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec 09 10:49:56 compute-0 nova_compute[189493]: 2025-12-09 10:49:56.246 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:57 compute-0 nova_compute[189493]: 2025-12-09 10:49:57.029 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:49:57 compute-0 podman[240451]: 2025-12-09 10:49:57.919520672 +0000 UTC m=+0.074807159 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:49:59 compute-0 podman[203687]: time="2025-12-09T10:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:49:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:49:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4761 "" "Go-http-client/1.1"
Dec 09 10:49:59 compute-0 nova_compute[189493]: 2025-12-09 10:49:59.976 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:49:59 compute-0 nova_compute[189493]: 2025-12-09 10:49:59.978 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:49:59 compute-0 nova_compute[189493]: 2025-12-09 10:49:59.999 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.088 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.089 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.102 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.102 189497 INFO nova.compute.claims [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Claim successful on node compute-0.ctlplane.example.com
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.240 189497 DEBUG nova.compute.provider_tree [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.253 189497 DEBUG nova.scheduler.client.report [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.282 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.283 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.325 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.326 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.349 189497 INFO nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.376 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.464 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.466 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.466 189497 INFO nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Creating image(s)
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.467 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.467 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.468 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.486 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.583 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.585 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.586 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.610 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.694 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.695 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.738 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.739 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.739 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.805 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.807 189497 DEBUG nova.virt.disk.api [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.808 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.871 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.873 189497 DEBUG nova.virt.disk.api [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
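Note: the qemu-img steps above create the instance root disk as a qcow2 overlay on the cached base image /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 with a 1073741824-byte (1 GiB) virtual size, then confirm the image cannot be shrunk below that size. Assuming shell access to the compute node, the resulting chain can be inspected with "qemu-img info --backing-chain /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk", which lists the overlay and its raw backing file.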
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.873 189497 DEBUG nova.objects.instance [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 1bddf2bf-8932-4428-97d7-7342a7ec414b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.901 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.902 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.903 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.916 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.985 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.987 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.988 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.005 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.102 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.103 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.145 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.146 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.147 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.210 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.212 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.214 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Ensure instance console log exists: /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.215 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.216 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.217 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.251 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.683 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Successfully updated port: 7819acf8-daa2-4391-96d4-ef33c260f794 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.701 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.701 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.702 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.807 189497 DEBUG nova.compute.manager [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-changed-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.808 189497 DEBUG nova.compute.manager [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Refreshing instance network info cache due to event network-changed-7819acf8-daa2-4391-96d4-ef33c260f794. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.808 189497 DEBUG oslo_concurrency.lockutils [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.872 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 09 10:50:01 compute-0 podman[240503]: 2025-12-09 10:50:01.917205463 +0000 UTC m=+0.078363519 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.031 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.442 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.464 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.465 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance network_info: |[{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.466 189497 DEBUG oslo_concurrency.lockutils [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.466 189497 DEBUG nova.network.neutron [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Refreshing network info cache for port 7819acf8-daa2-4391-96d4-ef33c260f794 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.473 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start _get_guest_xml network_info=[{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.484 189497 WARNING nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.505 189497 DEBUG nova.virt.libvirt.host [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.507 189497 DEBUG nova.virt.libvirt.host [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.513 189497 DEBUG nova.virt.libvirt.host [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.514 189497 DEBUG nova.virt.libvirt.host [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.514 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.515 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T10:47:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cf91b364-8467-4d1e-8c92-f7d1fab99905',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.516 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.516 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.517 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.517 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.517 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.518 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.519 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.519 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.519 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.520 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.530 189497 DEBUG nova.virt.libvirt.vif [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',id=2,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-ljrndswf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:50:00Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 09 10:50:02 compute-0 nova_compute[189493]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=1bddf2bf-8932-4428-97d7-7342a7ec414b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.531 189497 DEBUG nova.network.os_vif_util [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.534 189497 DEBUG nova.network.os_vif_util [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.536 189497 DEBUG nova.objects.instance [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bddf2bf-8932-4428-97d7-7342a7ec414b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.552 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] End _get_guest_xml xml=<domain type="kvm">
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <uuid>1bddf2bf-8932-4428-97d7-7342a7ec414b</uuid>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <name>instance-00000002</name>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <memory>524288</memory>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <vcpu>1</vcpu>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <metadata>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <nova:name>vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l</nova:name>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <nova:creationTime>2025-12-09 10:50:02</nova:creationTime>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <nova:flavor name="m1.small">
Dec 09 10:50:02 compute-0 nova_compute[189493]:         <nova:memory>512</nova:memory>
Dec 09 10:50:02 compute-0 nova_compute[189493]:         <nova:disk>1</nova:disk>
Dec 09 10:50:02 compute-0 nova_compute[189493]:         <nova:swap>0</nova:swap>
Dec 09 10:50:02 compute-0 nova_compute[189493]:         <nova:ephemeral>1</nova:ephemeral>
Dec 09 10:50:02 compute-0 nova_compute[189493]:         <nova:vcpus>1</nova:vcpus>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       </nova:flavor>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <nova:owner>
Dec 09 10:50:02 compute-0 nova_compute[189493]:         <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec 09 10:50:02 compute-0 nova_compute[189493]:         <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       </nova:owner>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <nova:root type="image" uuid="53d12211-5d5c-4333-b3ee-e3dcf1663767"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <nova:ports>
Dec 09 10:50:02 compute-0 nova_compute[189493]:         <nova:port uuid="7819acf8-daa2-4391-96d4-ef33c260f794">
Dec 09 10:50:02 compute-0 nova_compute[189493]:           <nova:ip type="fixed" address="192.168.0.212" ipVersion="4"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:         </nova:port>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       </nova:ports>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </nova:instance>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   </metadata>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <sysinfo type="smbios">
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <system>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <entry name="manufacturer">RDO</entry>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <entry name="product">OpenStack Compute</entry>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <entry name="serial">1bddf2bf-8932-4428-97d7-7342a7ec414b</entry>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <entry name="uuid">1bddf2bf-8932-4428-97d7-7342a7ec414b</entry>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <entry name="family">Virtual Machine</entry>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </system>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   </sysinfo>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <os>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <boot dev="hd"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <smbios mode="sysinfo"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   </os>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <features>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <acpi/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <apic/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <vmcoreinfo/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   </features>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <clock offset="utc">
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <timer name="pit" tickpolicy="delay"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <timer name="hpet" present="no"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   </clock>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <cpu mode="host-model" match="exact">
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <topology sockets="1" cores="1" threads="1"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   </cpu>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   <devices>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <target dev="vda" bus="virtio"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <target dev="vdb" bus="virtio"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <disk type="file" device="cdrom">
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <driver name="qemu" type="raw" cache="none"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.config"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <target dev="sda" bus="sata"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <interface type="ethernet">
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <mac address="fa:16:3e:01:4e:b4"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <model type="virtio"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <driver name="vhost" rx_queue_size="512"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <mtu size="1442"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <target dev="tap7819acf8-da"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </interface>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <serial type="pty">
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <log file="/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/console.log" append="off"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </serial>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <video>
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <model type="virtio"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </video>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <input type="tablet" bus="usb"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <rng model="virtio">
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <backend model="random">/dev/urandom</backend>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </rng>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <controller type="usb" index="0"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     <memballoon model="virtio">
Dec 09 10:50:02 compute-0 nova_compute[189493]:       <stats period="10"/>
Dec 09 10:50:02 compute-0 nova_compute[189493]:     </memballoon>
Dec 09 10:50:02 compute-0 nova_compute[189493]:   </devices>
Dec 09 10:50:02 compute-0 nova_compute[189493]: </domain>
Dec 09 10:50:02 compute-0 nova_compute[189493]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.552 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Preparing to wait for external event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.553 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.553 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.553 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.554 189497 DEBUG nova.virt.libvirt.vif [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',id=2,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-ljrndswf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:50:00Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 09 10:50:02 compute-0 nova_compute[189493]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=1bddf2bf-8932-4428-97d7-7342a7ec414b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.555 189497 DEBUG nova.network.os_vif_util [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.555 189497 DEBUG nova.network.os_vif_util [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.556 189497 DEBUG os_vif [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.557 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.557 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.558 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.569 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.570 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7819acf8-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.571 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7819acf8-da, col_values=(('external_ids', {'iface-id': '7819acf8-daa2-4391-96d4-ef33c260f794', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:4e:b4', 'vm-uuid': '1bddf2bf-8932-4428-97d7-7342a7ec414b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.574 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:02 compute-0 NetworkManager[56302]: <info>  [1765277402.5761] manager: (tap7819acf8-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.577 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.582 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.583 189497 INFO os_vif [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da')
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.640 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.640 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.640 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.640 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No VIF found with MAC fa:16:3e:01:4e:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.641 189497 INFO nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Using config drive
Dec 09 10:50:02 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:50:02.530 189497 DEBUG nova.virt.libvirt.vif [None req-94e35f23-c0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 10:50:02 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:50:02.554 189497 DEBUG nova.virt.libvirt.vif [None req-94e35f23-c0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 10:50:02 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:02.945 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.965 189497 INFO nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Creating config drive at /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.config
Dec 09 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.975 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7f79nqi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.123 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7f79nqi" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:03 compute-0 kernel: tap7819acf8-da: entered promiscuous mode
Dec 09 10:50:03 compute-0 NetworkManager[56302]: <info>  [1765277403.2710] manager: (tap7819acf8-da): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.272 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:03 compute-0 ovn_controller[97780]: 2025-12-09T10:50:03Z|00035|binding|INFO|Claiming lport 7819acf8-daa2-4391-96d4-ef33c260f794 for this chassis.
Dec 09 10:50:03 compute-0 ovn_controller[97780]: 2025-12-09T10:50:03Z|00036|binding|INFO|7819acf8-daa2-4391-96d4-ef33c260f794: Claiming fa:16:3e:01:4e:b4 192.168.0.212
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.280 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:4e:b4 192.168.0.212'], port_security=['fa:16:3e:01:4e:b4 192.168.0.212'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-x2vp5udxgoax-du67okrzyrz6-port-copozzjp5fc5', 'neutron:cidrs': '192.168.0.212/24', 'neutron:device_id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-x2vp5udxgoax-du67okrzyrz6-port-copozzjp5fc5', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=7819acf8-daa2-4391-96d4-ef33c260f794) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.282 106644 INFO neutron.agent.ovn.metadata.agent [-] Port 7819acf8-daa2-4391-96d4-ef33c260f794 in datapath c5af7354-5afe-400a-9e13-5500648117d8 bound to our chassis
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.283 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.291 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:03 compute-0 ovn_controller[97780]: 2025-12-09T10:50:03Z|00037|binding|INFO|Setting lport 7819acf8-daa2-4391-96d4-ef33c260f794 ovn-installed in OVS
Dec 09 10:50:03 compute-0 ovn_controller[97780]: 2025-12-09T10:50:03Z|00038|binding|INFO|Setting lport 7819acf8-daa2-4391-96d4-ef33c260f794 up in Southbound
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.300 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.302 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.309 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[d7af2d4c-4cf6-48c3-9684-1d29fd335f9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:50:03 compute-0 systemd-udevd[240552]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:50:03 compute-0 NetworkManager[56302]: <info>  [1765277403.3349] device (tap7819acf8-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 10:50:03 compute-0 NetworkManager[56302]: <info>  [1765277403.3389] device (tap7819acf8-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 09 10:50:03 compute-0 systemd-machined[155790]: New machine qemu-2-instance-00000002.
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.344 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e68d45-bcc7-4369-9f51-d05763e56c82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.348 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[9a54e29d-9d59-48c0-a685-b4d8d040c753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:50:03 compute-0 podman[240533]: 2025-12-09 10:50:03.357847378 +0000 UTC m=+0.114163158 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.29.0, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Dec 09 10:50:03 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.382 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[2f47f1d0-b27a-4250-9bca-8599296d2ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.411 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8b04a6-68b5-484d-b10a-853f5769777c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 28193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240567, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.430 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[4231f5c7-a8b3-49ae-b7f5-ff839d4d809f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240570, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240570, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.433 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.435 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.437 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.437 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.438 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.438 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.545 189497 DEBUG nova.compute.manager [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.545 189497 DEBUG oslo_concurrency.lockutils [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.546 189497 DEBUG oslo_concurrency.lockutils [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.546 189497 DEBUG oslo_concurrency.lockutils [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.546 189497 DEBUG nova.compute.manager [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Processing event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.012 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.013 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277404.0121238, 1bddf2bf-8932-4428-97d7-7342a7ec414b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.014 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] VM Started (Lifecycle Event)
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.020 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.026 189497 INFO nova.virt.libvirt.driver [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance spawned successfully.
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.026 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.032 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.036 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.046 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.046 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.047 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.047 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.047 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.048 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.059 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.059 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277404.0122805, 1bddf2bf-8932-4428-97d7-7342a7ec414b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.060 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] VM Paused (Lifecycle Event)
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.083 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.089 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277404.0183938, 1bddf2bf-8932-4428-97d7-7342a7ec414b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.089 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] VM Resumed (Lifecycle Event)
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.107 189497 INFO nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Took 3.64 seconds to spawn the instance on the hypervisor.
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.108 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.109 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.118 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.153 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.174 189497 INFO nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Took 4.13 seconds to build instance.
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.191 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.260 189497 DEBUG nova.network.neutron [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated VIF entry in instance network info cache for port 7819acf8-daa2-4391-96d4-ef33c260f794. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.260 189497 DEBUG nova.network.neutron [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.476 189497 DEBUG oslo_concurrency.lockutils [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.624 189497 DEBUG nova.compute.manager [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.625 189497 DEBUG oslo_concurrency.lockutils [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.625 189497 DEBUG oslo_concurrency.lockutils [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.626 189497 DEBUG oslo_concurrency.lockutils [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.626 189497 DEBUG nova.compute.manager [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] No waiting events found dispatching network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.626 189497 WARNING nova.compute.manager [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received unexpected event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 for instance with vm_state active and task_state None.
Dec 09 10:50:06 compute-0 sshd-session[240582]: Invalid user debian from 159.223.8.217 port 33468
Dec 09 10:50:06 compute-0 sshd-session[240582]: Connection closed by invalid user debian 159.223.8.217 port 33468 [preauth]
Dec 09 10:50:06 compute-0 podman[240584]: 2025-12-09 10:50:06.564098475 +0000 UTC m=+0.120758505 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 10:50:06 compute-0 podman[240585]: 2025-12-09 10:50:06.571429454 +0000 UTC m=+0.121091824 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 09 10:50:07 compute-0 nova_compute[189493]: 2025-12-09 10:50:07.037 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:07 compute-0 nova_compute[189493]: 2025-12-09 10:50:07.576 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:11 compute-0 podman[240621]: 2025-12-09 10:50:11.008969312 +0000 UTC m=+0.136925474 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., version=9.6, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Dec 09 10:50:12 compute-0 nova_compute[189493]: 2025-12-09 10:50:12.041 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:12 compute-0 nova_compute[189493]: 2025-12-09 10:50:12.583 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:13 compute-0 podman[240643]: 2025-12-09 10:50:13.052401427 +0000 UTC m=+0.207176852 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:50:15 compute-0 podman[240669]: 2025-12-09 10:50:15.988884494 +0000 UTC m=+0.127874097 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
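The podman "container health_status" events above all follow the same shape: the container id, then a parenthesised list of labels and config_data including name=, health_status= and health_failing_streak=, with the healthcheck command and mount point declared under 'healthcheck' in config_data. A minimal sketch of pulling those three fields back out of such a journal line (the field names are taken from the lines above; nothing else is assumed):

    import re

    def parse_health_event(line):
        """Return (container_name, health_status, failing_streak) or None."""
        if "container health_status" not in line:
            return None
        # The fields appear as ", name=...", ", health_status=..." and
        # ", health_failing_streak=N" inside the parenthesised label list.
        name = re.search(r"[,(] name=([^,)]+)", line)
        status = re.search(r"[,(] health_status=([^,)]+)", line)
        streak = re.search(r"[,(] health_failing_streak=(\d+)", line)
        if not (name and status and streak):
            return None
        return name.group(1), status.group(1), int(streak.group(1))

    # Against the node_exporter event above this returns
    # ("node_exporter", "healthy", 0).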
Dec 09 10:50:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:16.978 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:16.979 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
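The Acquiring/acquired/released triple above is the standard trace that oslo.concurrency's synchronized decorator emits around a critical section. A minimal illustrative use of that primitive; the decorator comes from oslo_concurrency, while the function body below is hypothetical:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Only one thread at a time runs this body; lockutils logs the
        # "Acquiring" / "acquired ... waited" / "released ... held" DEBUG
        # lines seen in the journal around it.
        pass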
Dec 09 10:50:17 compute-0 nova_compute[189493]: 2025-12-09 10:50:17.042 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:17 compute-0 nova_compute[189493]: 2025-12-09 10:50:17.588 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:22 compute-0 nova_compute[189493]: 2025-12-09 10:50:22.044 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:22 compute-0 nova_compute[189493]: 2025-12-09 10:50:22.590 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
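The recurring "[POLLIN] on fd 26" lines are nova-compute's OVSDB IDL connection waking up whenever its socket becomes readable; ovsdbapp forwards the ovs poller's debug output into the service log. The same readiness wait, shown here with the standard library rather than the ovs.poller wrapper the service actually uses, looks like this:

    import select
    import socket

    def wait_readable(sock: socket.socket, timeout_ms: int = 5000) -> bool:
        """Return True if `sock` has data to read within the timeout."""
        p = select.poll()
        p.register(sock.fileno(), select.POLLIN)
        events = p.poll(timeout_ms)
        # Each event is (fd, eventmask); POLLIN set means recv() won't block.
        return any(mask & select.POLLIN for _, mask in events)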
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.288 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.289 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
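The block of "Registering pollster ... to be executed via executor" lines describes one polling cycle: every pollster from the [pollsters] source is submitted to a shared ThreadPoolExecutor (sized [1] here, which is why the manager warns that the pollster count exceeds the worker threads) together with shared cache, history and discovery-cache dicts. A simplified sketch of that submit pattern; the callable signature is illustrative, not ceilometer's actual code:

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, discover, workers=1):
        """Submit each pollster to a shared executor and collect its samples.

        `pollsters` is any iterable of callables taking
        (resources, cache, history, discovery_cache); the real Ceilometer
        extension objects differ.
        """
        cache, history, discovery_cache = {}, {}, {}
        with ThreadPoolExecutor(max_workers=workers) as executor:
            futures = [
                executor.submit(p, discover(), cache, history, discovery_cache)
                for p in pollsters
            ]
            # With a single worker the pollsters run one after another,
            # which is the behaviour the DEBUG lines above document.
            return [f.result() for f in futures]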
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.302 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1bddf2bf-8932-4428-97d7-7342a7ec414b from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 09 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.751 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1bddf2bf-8932-4428-97d7-7342a7ec414b -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}c39d506960fbc5044d0bc54d9594567a78a3d14170701e46780a30eef7979125" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.366 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 09 Dec 2025 10:50:23 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fbb073f9-cf4a-40e5-9984-d6fe4fa8bd9a x-openstack-request-id: req-fbb073f9-cf4a-40e5-9984-d6fe4fa8bd9a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.366 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1bddf2bf-8932-4428-97d7-7342a7ec414b", "name": "vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l", "status": "ACTIVE", "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "user_id": "e6d3a937c2a74eb0816d9f63820935e0", "metadata": {"metering.server_group": "24f6e5b2-dd43-46f1-87a4-e2efc1300914"}, "hostId": "17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee", "image": {"id": "53d12211-5d5c-4333-b3ee-e3dcf1663767", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53d12211-5d5c-4333-b3ee-e3dcf1663767"}]}, "flavor": {"id": "cf91b364-8467-4d1e-8c92-f7d1fab99905", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cf91b364-8467-4d1e-8c92-f7d1fab99905"}]}, "created": "2025-12-09T10:49:58Z", "updated": "2025-12-09T10:50:04Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.212", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:01:4e:b4"}, {"version": 4, "addr": "192.168.122.172", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:01:4e:b4"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1bddf2bf-8932-4428-97d7-7342a7ec414b"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1bddf2bf-8932-4428-97d7-7342a7ec414b"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-09T10:50:04.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.366 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1bddf2bf-8932-4428-97d7-7342a7ec414b used request id req-fbb073f9-cf4a-40e5-9984-d6fe4fa8bd9a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.370 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.374 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.376 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}c39d506960fbc5044d0bc54d9594567a78a3d14170701e46780a30eef7979125" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.743 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Tue, 09 Dec 2025 10:50:24 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ddb7cfc0-30dd-4590-a7cd-6549c406cf02 x-openstack-request-id: req-ddb7cfc0-30dd-4590-a7cd-6549c406cf02 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.743 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f", "name": "test_0", "status": "ACTIVE", "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "user_id": "e6d3a937c2a74eb0816d9f63820935e0", "metadata": {}, "hostId": "17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee", "image": {"id": "53d12211-5d5c-4333-b3ee-e3dcf1663767", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53d12211-5d5c-4333-b3ee-e3dcf1663767"}]}, "flavor": {"id": "cf91b364-8467-4d1e-8c92-f7d1fab99905", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cf91b364-8467-4d1e-8c92-f7d1fab99905"}]}, "created": "2025-12-09T10:48:38Z", "updated": "2025-12-09T10:48:53Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.250", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c7:65:39"}, {"version": 4, "addr": "192.168.122.226", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c7:65:39"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-09T10:48:53.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.743 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f used request id req-ddb7cfc0-30dd-4590-a7cd-6549c406cf02 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.747 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
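The discovery lines above show ceilometer calling GET /v2.1/servers/<uuid> on the internal Nova endpoint with microversion 2.1 via python-novaclient over a keystoneauth session. A hedged sketch of the same call; the auth_url, credentials and project below are placeholders, only the instance UUID is taken from the log:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client as nova_client

    # Placeholder credentials -- the real values live in the agent's config.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",
        username="ceilometer",
        password="secret",
        project_name="service",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    nova = nova_client.Client("2.1", session=session.Session(auth=auth))

    # UUID taken from the discovery lines above.
    server = nova.servers.get("1bddf2bf-8932-4428-97d7-7342a7ec414b")
    # Fields the discovery code folds into its instance data:
    print(server.name, getattr(server, "OS-EXT-STS:vm_state"), server.flavor["id"])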
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.747 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.748 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.749 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:50:24.748586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.756 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1bddf2bf-8932-4428-97d7-7342a7ec414b / tap7819acf8-da inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.757 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.764 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f / tap2c684388-b6 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.764 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.766 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.767 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.768 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.768 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.768 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.769 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:50:24.769458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.795 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.796 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.796 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.824 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.824 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.825 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.825 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:50:24.826262) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.829 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.829 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:50:24.827946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:50:24.829198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:24 compute-0 nova_compute[189493]: 2025-12-09 10:50:24.875 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:50:24 compute-0 nova_compute[189493]: 2025-12-09 10:50:24.881 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.938 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.939 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.939 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:24 compute-0 podman[240696]: 2025-12-09 10:50:24.962684392 +0000 UTC m=+0.121557307 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.033 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.036 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.036 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.036 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:50:25.036142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 331172565 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:50:25.038138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.039 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 1023978 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.039 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.039 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.040 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:50:25.041408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.042 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.042 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.042 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.043 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.046 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.046 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.046 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:50:25.044620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.047 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:50:25.048234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.076 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 20610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.102 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 34520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.104 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.104 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.104 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.105 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.105 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.106 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.107 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:50:25.103744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.109 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.109 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.110 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.110 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.112 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.112 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.112 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.113 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.113 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.114 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.114 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:50:25.108145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:50:25.112283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:50:25.115933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:50:25.117400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:50:25.119052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:50:25.121232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:50:25.122360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-09T10:50:25.123279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l>, <NovaLikeServer: test_0>]
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:50:25.124977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:50:25.125953) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:50:25.127215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:50:25.128483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:50:25.130083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:50:25.131345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 1bddf2bf-8932-4428-97d7-7342a7ec414b: ceilometer.compute.pollsters.NoVolumeException
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.133 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.133 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-09T10:50:25.133011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.133 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l>, <NovaLikeServer: test_0>]
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:50:25 compute-0 nova_compute[189493]: 2025-12-09 10:50:25.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:50:25 compute-0 nova_compute[189493]: 2025-12-09 10:50:25.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:50:26 compute-0 nova_compute[189493]: 2025-12-09 10:50:26.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.047 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.593 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.879 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.882 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.883 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.883 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.981 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.054 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.055 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.126 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.128 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.186 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.190 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.248 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.255 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.352 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.355 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.421 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.423 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.505 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.512 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.617 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:50:28 compute-0 podman[240742]: 2025-12-09 10:50:28.980590153 +0000 UTC m=+0.128547325 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.113 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.115 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5106MB free_disk=72.18374252319336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.115 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.116 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.201 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.202 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.202 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.203 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.261 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.279 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.300 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.301 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:50:29 compute-0 podman[203687]: time="2025-12-09T10:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:50:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:50:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4778 "" "Go-http-client/1.1"
Dec 09 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.303 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.304 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.305 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.690 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.690 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.691 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.691 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:50:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:50:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.688 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.711 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.712 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.714 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.715 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.716 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:50:32 compute-0 nova_compute[189493]: 2025-12-09 10:50:32.050 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:32 compute-0 nova_compute[189493]: 2025-12-09 10:50:32.597 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:32 compute-0 podman[240764]: 2025-12-09 10:50:32.999417429 +0000 UTC m=+0.138858363 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 09 10:50:33 compute-0 ovn_controller[97780]: 2025-12-09T10:50:33Z|00039|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec 09 10:50:34 compute-0 podman[240784]: 2025-12-09 10:50:34.020424706 +0000 UTC m=+0.153148966 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-type=git, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30)
Dec 09 10:50:37 compute-0 podman[240802]: 2025-12-09 10:50:37.003334304 +0000 UTC m=+0.125258916 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 09 10:50:37 compute-0 podman[240803]: 2025-12-09 10:50:37.005441261 +0000 UTC m=+0.132933263 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 09 10:50:37 compute-0 nova_compute[189493]: 2025-12-09 10:50:37.054 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:37 compute-0 nova_compute[189493]: 2025-12-09 10:50:37.602 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:37 compute-0 sshd-session[240838]: Invalid user debian from 159.223.8.217 port 39558
Dec 09 10:50:37 compute-0 sshd-session[240838]: Connection closed by invalid user debian 159.223.8.217 port 39558 [preauth]
Dec 09 10:50:39 compute-0 ovn_controller[97780]: 2025-12-09T10:50:39Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:4e:b4 192.168.0.212
Dec 09 10:50:39 compute-0 ovn_controller[97780]: 2025-12-09T10:50:39Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:4e:b4 192.168.0.212
Dec 09 10:50:42 compute-0 podman[240855]: 2025-12-09 10:50:42.005479165 +0000 UTC m=+0.146547109 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350)
Dec 09 10:50:42 compute-0 nova_compute[189493]: 2025-12-09 10:50:42.056 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:42 compute-0 nova_compute[189493]: 2025-12-09 10:50:42.606 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:44 compute-0 podman[240874]: 2025-12-09 10:50:44.006294762 +0000 UTC m=+0.152340455 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 10:50:46 compute-0 podman[240900]: 2025-12-09 10:50:46.967738042 +0000 UTC m=+0.095014764 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:50:47 compute-0 nova_compute[189493]: 2025-12-09 10:50:47.059 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:47 compute-0 nova_compute[189493]: 2025-12-09 10:50:47.610 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:52 compute-0 nova_compute[189493]: 2025-12-09 10:50:52.062 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:52 compute-0 nova_compute[189493]: 2025-12-09 10:50:52.614 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:55 compute-0 podman[240923]: 2025-12-09 10:50:55.982744446 +0000 UTC m=+0.129773068 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 10:50:57 compute-0 nova_compute[189493]: 2025-12-09 10:50:57.063 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:57 compute-0 nova_compute[189493]: 2025-12-09 10:50:57.616 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:50:59 compute-0 podman[203687]: time="2025-12-09T10:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:50:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:50:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Dec 09 10:50:59 compute-0 podman[240943]: 2025-12-09 10:50:59.974345659 +0000 UTC m=+0.115701400 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:51:01 compute-0 anacron[30864]: Job `cron.daily' started
Dec 09 10:51:01 compute-0 anacron[30864]: Job `cron.daily' terminated
Dec 09 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:51:02 compute-0 nova_compute[189493]: 2025-12-09 10:51:02.066 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:02 compute-0 nova_compute[189493]: 2025-12-09 10:51:02.620 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:03 compute-0 podman[240969]: 2025-12-09 10:51:03.968131453 +0000 UTC m=+0.113601314 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 09 10:51:05 compute-0 podman[240987]: 2025-12-09 10:51:05.007701569 +0000 UTC m=+0.147095684 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, io.openshift.expose-services=, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, name=ubi9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1214.1726694543)
Dec 09 10:51:07 compute-0 nova_compute[189493]: 2025-12-09 10:51:07.070 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:07 compute-0 nova_compute[189493]: 2025-12-09 10:51:07.624 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:07 compute-0 podman[241006]: 2025-12-09 10:51:07.970922309 +0000 UTC m=+0.112503135 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 09 10:51:07 compute-0 podman[241007]: 2025-12-09 10:51:07.999331202 +0000 UTC m=+0.135180734 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 09 10:51:08 compute-0 sshd-session[241043]: Invalid user debian from 159.223.8.217 port 48356
Dec 09 10:51:08 compute-0 sshd-session[241043]: Connection closed by invalid user debian 159.223.8.217 port 48356 [preauth]
Dec 09 10:51:12 compute-0 nova_compute[189493]: 2025-12-09 10:51:12.074 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:12 compute-0 nova_compute[189493]: 2025-12-09 10:51:12.628 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:12 compute-0 podman[241045]: 2025-12-09 10:51:12.954367635 +0000 UTC m=+0.109064332 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7)
Dec 09 10:51:14 compute-0 podman[241065]: 2025-12-09 10:51:14.885022725 +0000 UTC m=+0.181667983 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 10:51:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:51:16.979 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:51:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:51:16.979 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:51:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:51:16.980 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:51:17 compute-0 nova_compute[189493]: 2025-12-09 10:51:17.076 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:17 compute-0 nova_compute[189493]: 2025-12-09 10:51:17.633 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:17 compute-0 podman[241090]: 2025-12-09 10:51:17.980957681 +0000 UTC m=+0.118993280 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 10:51:22 compute-0 nova_compute[189493]: 2025-12-09 10:51:22.079 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:22 compute-0 nova_compute[189493]: 2025-12-09 10:51:22.636 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:25 compute-0 nova_compute[189493]: 2025-12-09 10:51:25.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:25 compute-0 nova_compute[189493]: 2025-12-09 10:51:25.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:25 compute-0 nova_compute[189493]: 2025-12-09 10:51:25.845 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:26 compute-0 nova_compute[189493]: 2025-12-09 10:51:26.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:27 compute-0 podman[241114]: 2025-12-09 10:51:27.008097933 +0000 UTC m=+0.139249613 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec 09 10:51:27 compute-0 nova_compute[189493]: 2025-12-09 10:51:27.083 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:27 compute-0 nova_compute[189493]: 2025-12-09 10:51:27.639 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:27 compute-0 nova_compute[189493]: 2025-12-09 10:51:27.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:28 compute-0 nova_compute[189493]: 2025-12-09 10:51:28.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:28 compute-0 nova_compute[189493]: 2025-12-09 10:51:28.868 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:28 compute-0 nova_compute[189493]: 2025-12-09 10:51:28.869 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:51:29 compute-0 nova_compute[189493]: 2025-12-09 10:51:29.682 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:51:29 compute-0 nova_compute[189493]: 2025-12-09 10:51:29.683 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:51:29 compute-0 nova_compute[189493]: 2025-12-09 10:51:29.683 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:51:29 compute-0 podman[203687]: time="2025-12-09T10:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:51:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:51:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4784 "" "Go-http-client/1.1"
Dec 09 10:51:30 compute-0 podman[241133]: 2025-12-09 10:51:30.981693974 +0000 UTC m=+0.114493088 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.716 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.740 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.741 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.741 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.741 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.766 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.767 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.767 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.768 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.871 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.970 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.971 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.067 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.069 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.101 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.174 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.176 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.262 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.278 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.351 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.353 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.420 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.421 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.531 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.533 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.631 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.642 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.231 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.232 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=72.16302108764648GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.232 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.233 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.489 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.490 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.490 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.490 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.556 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.577 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.580 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.580 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.682 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.683 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:51:34 compute-0 podman[241182]: 2025-12-09 10:51:34.965134441 +0000 UTC m=+0.110444990 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 09 10:51:35 compute-0 podman[241201]: 2025-12-09 10:51:35.973501068 +0000 UTC m=+0.113928982 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, version=9.4, container_name=kepler, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Dec 09 10:51:37 compute-0 nova_compute[189493]: 2025-12-09 10:51:37.090 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:37 compute-0 nova_compute[189493]: 2025-12-09 10:51:37.647 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:38 compute-0 sshd-session[241220]: Invalid user debian from 159.223.8.217 port 39738
Dec 09 10:51:38 compute-0 sshd-session[241220]: Connection closed by invalid user debian 159.223.8.217 port 39738 [preauth]
Dec 09 10:51:38 compute-0 podman[241223]: 2025-12-09 10:51:38.336511959 +0000 UTC m=+0.100782709 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 09 10:51:38 compute-0 podman[241222]: 2025-12-09 10:51:38.352693424 +0000 UTC m=+0.110180811 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 09 10:51:42 compute-0 nova_compute[189493]: 2025-12-09 10:51:42.094 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:42 compute-0 nova_compute[189493]: 2025-12-09 10:51:42.651 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:43 compute-0 podman[241259]: 2025-12-09 10:51:43.957662635 +0000 UTC m=+0.107100950 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 09 10:51:46 compute-0 podman[241280]: 2025-12-09 10:51:46.016732216 +0000 UTC m=+0.149034427 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec 09 10:51:47 compute-0 nova_compute[189493]: 2025-12-09 10:51:47.097 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:47 compute-0 nova_compute[189493]: 2025-12-09 10:51:47.653 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:48 compute-0 podman[241304]: 2025-12-09 10:51:48.952590649 +0000 UTC m=+0.104507040 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:51:52 compute-0 nova_compute[189493]: 2025-12-09 10:51:52.101 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:52 compute-0 nova_compute[189493]: 2025-12-09 10:51:52.656 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:55 compute-0 sshd-session[241279]: error: kex_exchange_identification: read: Connection timed out
Dec 09 10:51:55 compute-0 sshd-session[241279]: banner exchange: Connection from 27.148.182.148 port 53464: Connection timed out
Dec 09 10:51:57 compute-0 nova_compute[189493]: 2025-12-09 10:51:57.103 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:57 compute-0 nova_compute[189493]: 2025-12-09 10:51:57.660 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:51:57 compute-0 podman[241329]: 2025-12-09 10:51:57.996749796 +0000 UTC m=+0.131911855 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:51:59 compute-0 podman[203687]: time="2025-12-09T10:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:51:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:51:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Dec 09 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:52:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:52:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:52:02 compute-0 podman[241348]: 2025-12-09 10:52:02.000459116 +0000 UTC m=+0.133890029 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:52:02 compute-0 nova_compute[189493]: 2025-12-09 10:52:02.105 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:02 compute-0 nova_compute[189493]: 2025-12-09 10:52:02.665 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:04 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 09 10:52:05 compute-0 podman[241373]: 2025-12-09 10:52:05.951322577 +0000 UTC m=+0.111718554 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 09 10:52:07 compute-0 podman[241393]: 2025-12-09 10:52:07.005637549 +0000 UTC m=+0.145409129 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, version=9.4, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Dec 09 10:52:07 compute-0 sshd-session[241328]: error: kex_exchange_identification: read: Connection timed out
Dec 09 10:52:07 compute-0 sshd-session[241328]: banner exchange: Connection from 27.148.182.148 port 39886: Connection timed out
Dec 09 10:52:07 compute-0 nova_compute[189493]: 2025-12-09 10:52:07.108 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:07 compute-0 nova_compute[189493]: 2025-12-09 10:52:07.667 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:08 compute-0 sshd-session[241411]: Invalid user dev from 159.223.8.217 port 33912
Dec 09 10:52:08 compute-0 sshd-session[241411]: Connection closed by invalid user dev 159.223.8.217 port 33912 [preauth]
Dec 09 10:52:08 compute-0 podman[241413]: 2025-12-09 10:52:08.818503655 +0000 UTC m=+0.138220085 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 10:52:08 compute-0 podman[241414]: 2025-12-09 10:52:08.82315648 +0000 UTC m=+0.126205672 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 09 10:52:12 compute-0 nova_compute[189493]: 2025-12-09 10:52:12.111 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:12 compute-0 nova_compute[189493]: 2025-12-09 10:52:12.671 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:14 compute-0 podman[241450]: 2025-12-09 10:52:14.81553801 +0000 UTC m=+0.114122708 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git)
Dec 09 10:52:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:52:16.981 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:52:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:52:16.981 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:52:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:52:16.982 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:52:17 compute-0 podman[241470]: 2025-12-09 10:52:17.059153682 +0000 UTC m=+0.195290029 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible)
Dec 09 10:52:17 compute-0 nova_compute[189493]: 2025-12-09 10:52:17.115 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:17 compute-0 nova_compute[189493]: 2025-12-09 10:52:17.675 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:19 compute-0 podman[241498]: 2025-12-09 10:52:19.978126922 +0000 UTC m=+0.118434195 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:52:22 compute-0 nova_compute[189493]: 2025-12-09 10:52:22.116 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:22 compute-0 nova_compute[189493]: 2025-12-09 10:52:22.679 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.289 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.290 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.321 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.327 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.328 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:52:23.328476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.336 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.342 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.344 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:52:23.344041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.386 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.386 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.387 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.429 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.431 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.434 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:52:23.432611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.435 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:52:23.435322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.436 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:52:23.437715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.515 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.516 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.516 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.629 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.630 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.630 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.633 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.635 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.635 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.635 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.638 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.638 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.638 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.639 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.639 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.641 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.641 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.641 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.642 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.642 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:52:23.632357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:52:23.634454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:52:23.637804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:52:23.640875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:52:23.643531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.669 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 69030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.698 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 36320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.699 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.699 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.700 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.700 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.700 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.701 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.701 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.702 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.702 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.703 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:52:23.700306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.705 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.706 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.706 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41811968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.706 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.707 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.709 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2108717398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.712 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.713 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.713 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.715 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.716 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.717 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.717 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.718 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.720 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.720 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 4864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.720 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.722 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.722 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.723 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.723 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.723 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.724 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.724 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.725 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.726 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.727 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.728 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.728 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 4801 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.729 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 4864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 49.13671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:52:23.705954) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:52:23.711535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:52:23.717413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:52:23.719960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:52:23.723048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:52:23.728415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:52:23.730599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:52:23.731875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:52:23.732896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:52:23.734146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:52:23.735409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:52:23.736726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.746 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:52:23.738110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:52:25 compute-0 nova_compute[189493]: 2025-12-09 10:52:25.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:52:25 compute-0 nova_compute[189493]: 2025-12-09 10:52:25.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:52:27 compute-0 nova_compute[189493]: 2025-12-09 10:52:27.119 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:27 compute-0 nova_compute[189493]: 2025-12-09 10:52:27.681 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:27 compute-0 nova_compute[189493]: 2025-12-09 10:52:27.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:52:28 compute-0 nova_compute[189493]: 2025-12-09 10:52:28.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:52:28 compute-0 nova_compute[189493]: 2025-12-09 10:52:28.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:52:28 compute-0 nova_compute[189493]: 2025-12-09 10:52:28.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:52:28 compute-0 podman[241528]: 2025-12-09 10:52:28.9754157 +0000 UTC m=+0.118477161 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 09 10:52:29 compute-0 podman[203687]: time="2025-12-09T10:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:52:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:52:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
Dec 09 10:52:29 compute-0 nova_compute[189493]: 2025-12-09 10:52:29.926 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:52:29 compute-0 nova_compute[189493]: 2025-12-09 10:52:29.927 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:52:29 compute-0 nova_compute[189493]: 2025-12-09 10:52:29.927 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:52:29 compute-0 nova_compute[189493]: 2025-12-09 10:52:29.928 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:52:32 compute-0 nova_compute[189493]: 2025-12-09 10:52:32.123 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:32 compute-0 nova_compute[189493]: 2025-12-09 10:52:32.684 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:32 compute-0 podman[241549]: 2025-12-09 10:52:32.981514331 +0000 UTC m=+0.123328276 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.240 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.342 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.343 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.343 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.344 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.344 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.345 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.497 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.499 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.499 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.500 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.603 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.681 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.683 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.760 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.762 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.865 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.867 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.965 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.973 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.068 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.069 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.164 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.165 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.259 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.261 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.321 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.711 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.713 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5035MB free_disk=72.16301727294922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.713 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.713 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.817 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.818 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.819 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.820 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.899 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.915 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.917 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.917 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:52:35 compute-0 nova_compute[189493]: 2025-12-09 10:52:35.415 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:52:35 compute-0 nova_compute[189493]: 2025-12-09 10:52:35.416 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:52:36 compute-0 podman[241595]: 2025-12-09 10:52:36.972659555 +0000 UTC m=+0.113806680 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 10:52:37 compute-0 nova_compute[189493]: 2025-12-09 10:52:37.127 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:37 compute-0 nova_compute[189493]: 2025-12-09 10:52:37.687 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:37 compute-0 podman[241618]: 2025-12-09 10:52:37.981461837 +0000 UTC m=+0.127934654 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=edpm, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, architecture=x86_64, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, version=9.4, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 09 10:52:38 compute-0 sshd-session[241616]: Invalid user dev from 159.223.8.217 port 51058
Dec 09 10:52:38 compute-0 sshd-session[241616]: Connection closed by invalid user dev 159.223.8.217 port 51058 [preauth]
Dec 09 10:52:39 compute-0 podman[241637]: 2025-12-09 10:52:39.96715091 +0000 UTC m=+0.119903218 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:52:39 compute-0 podman[241638]: 2025-12-09 10:52:39.993658493 +0000 UTC m=+0.127830022 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 09 10:52:42 compute-0 nova_compute[189493]: 2025-12-09 10:52:42.131 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:42 compute-0 nova_compute[189493]: 2025-12-09 10:52:42.690 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:45 compute-0 podman[241676]: 2025-12-09 10:52:45.958687477 +0000 UTC m=+0.108592907 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 09 10:52:47 compute-0 nova_compute[189493]: 2025-12-09 10:52:47.136 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:47 compute-0 nova_compute[189493]: 2025-12-09 10:52:47.693 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:48 compute-0 podman[241698]: 2025-12-09 10:52:48.000672089 +0000 UTC m=+0.157088525 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:52:50 compute-0 podman[241724]: 2025-12-09 10:52:50.968279072 +0000 UTC m=+0.121914139 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:52:52 compute-0 nova_compute[189493]: 2025-12-09 10:52:52.139 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:52 compute-0 nova_compute[189493]: 2025-12-09 10:52:52.698 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:57 compute-0 nova_compute[189493]: 2025-12-09 10:52:57.144 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:57 compute-0 nova_compute[189493]: 2025-12-09 10:52:57.701 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:52:59 compute-0 podman[203687]: time="2025-12-09T10:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:52:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:52:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Dec 09 10:52:59 compute-0 podman[241747]: 2025-12-09 10:52:59.97669851 +0000 UTC m=+0.114755835 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 09 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:53:02 compute-0 nova_compute[189493]: 2025-12-09 10:53:02.147 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:02 compute-0 nova_compute[189493]: 2025-12-09 10:53:02.705 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:03 compute-0 podman[241768]: 2025-12-09 10:53:03.960730622 +0000 UTC m=+0.110755342 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:53:06 compute-0 sshd-session[241791]: Invalid user dev from 159.223.8.217 port 39418
Dec 09 10:53:06 compute-0 sshd-session[241791]: Connection closed by invalid user dev 159.223.8.217 port 39418 [preauth]
Dec 09 10:53:07 compute-0 nova_compute[189493]: 2025-12-09 10:53:07.155 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:07 compute-0 nova_compute[189493]: 2025-12-09 10:53:07.710 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:07 compute-0 podman[241793]: 2025-12-09 10:53:07.980592546 +0000 UTC m=+0.123795648 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:53:08 compute-0 podman[241813]: 2025-12-09 10:53:08.987017647 +0000 UTC m=+0.128318695 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 09 10:53:10 compute-0 podman[241833]: 2025-12-09 10:53:10.91175091 +0000 UTC m=+0.064363558 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 09 10:53:10 compute-0 podman[241834]: 2025-12-09 10:53:10.948073265 +0000 UTC m=+0.097924462 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 09 10:53:12 compute-0 nova_compute[189493]: 2025-12-09 10:53:12.154 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:12 compute-0 nova_compute[189493]: 2025-12-09 10:53:12.715 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:16 compute-0 podman[241872]: 2025-12-09 10:53:16.964752359 +0000 UTC m=+0.107044406 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Dec 09 10:53:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:53:16.982 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:53:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:53:16.983 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:53:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:53:16.983 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:53:17 compute-0 nova_compute[189493]: 2025-12-09 10:53:17.155 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:17 compute-0 nova_compute[189493]: 2025-12-09 10:53:17.719 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:19 compute-0 podman[241893]: 2025-12-09 10:53:19.000194644 +0000 UTC m=+0.140688014 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:53:21 compute-0 podman[241920]: 2025-12-09 10:53:21.940217546 +0000 UTC m=+0.096938176 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 10:53:22 compute-0 nova_compute[189493]: 2025-12-09 10:53:22.159 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:22 compute-0 nova_compute[189493]: 2025-12-09 10:53:22.723 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:25 compute-0 nova_compute[189493]: 2025-12-09 10:53:25.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:26 compute-0 nova_compute[189493]: 2025-12-09 10:53:26.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:27 compute-0 nova_compute[189493]: 2025-12-09 10:53:27.168 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:27 compute-0 nova_compute[189493]: 2025-12-09 10:53:27.727 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:27 compute-0 nova_compute[189493]: 2025-12-09 10:53:27.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:29 compute-0 podman[203687]: time="2025-12-09T10:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:53:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:53:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec 09 10:53:29 compute-0 nova_compute[189493]: 2025-12-09 10:53:29.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:29 compute-0 nova_compute[189493]: 2025-12-09 10:53:29.844 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:53:30 compute-0 podman[241941]: 2025-12-09 10:53:30.931599006 +0000 UTC m=+0.087223427 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Dec 09 10:53:30 compute-0 nova_compute[189493]: 2025-12-09 10:53:30.993 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:53:30 compute-0 nova_compute[189493]: 2025-12-09 10:53:30.994 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:53:30 compute-0 nova_compute[189493]: 2025-12-09 10:53:30.994 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:53:32 compute-0 nova_compute[189493]: 2025-12-09 10:53:32.169 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:32 compute-0 nova_compute[189493]: 2025-12-09 10:53:32.733 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.169 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.186 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.187 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.188 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.188 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.189 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.190 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.220 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.221 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.222 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.311 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:53:34 compute-0 sshd-session[241960]: Invalid user dev from 159.223.8.217 port 34998
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.417 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.419 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:53:34 compute-0 sshd-session[241960]: Connection closed by invalid user dev 159.223.8.217 port 34998 [preauth]
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.489 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.491 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:53:34 compute-0 podman[241963]: 2025-12-09 10:53:34.502536701 +0000 UTC m=+0.148110163 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.558 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.559 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.632 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.642 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.720 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.721 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.782 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.783 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.846 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.847 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.904 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.307 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.309 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=72.16301727294922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.309 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.309 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.601 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.602 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.602 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.673 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.692 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.694 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.695 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:53:36 compute-0 nova_compute[189493]: 2025-12-09 10:53:36.347 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:36 compute-0 nova_compute[189493]: 2025-12-09 10:53:36.383 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:53:36 compute-0 nova_compute[189493]: 2025-12-09 10:53:36.383 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:53:37 compute-0 nova_compute[189493]: 2025-12-09 10:53:37.172 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:37 compute-0 nova_compute[189493]: 2025-12-09 10:53:37.737 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:38 compute-0 podman[242008]: 2025-12-09 10:53:38.953940476 +0000 UTC m=+0.103610578 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 09 10:53:39 compute-0 podman[242028]: 2025-12-09 10:53:39.953456269 +0000 UTC m=+0.108833403 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9, io.buildah.version=1.29.0, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543)
Dec 09 10:53:41 compute-0 podman[242048]: 2025-12-09 10:53:41.97748999 +0000 UTC m=+0.116121041 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 09 10:53:41 compute-0 podman[242049]: 2025-12-09 10:53:41.977581782 +0000 UTC m=+0.115231037 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 09 10:53:42 compute-0 nova_compute[189493]: 2025-12-09 10:53:42.177 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:42 compute-0 nova_compute[189493]: 2025-12-09 10:53:42.741 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:47 compute-0 nova_compute[189493]: 2025-12-09 10:53:47.180 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:47 compute-0 nova_compute[189493]: 2025-12-09 10:53:47.747 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:48 compute-0 podman[242082]: 2025-12-09 10:53:48.000560518 +0000 UTC m=+0.133125799 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 09 10:53:50 compute-0 podman[242104]: 2025-12-09 10:53:50.026589989 +0000 UTC m=+0.176858674 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 09 10:53:52 compute-0 nova_compute[189493]: 2025-12-09 10:53:52.184 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:52 compute-0 nova_compute[189493]: 2025-12-09 10:53:52.751 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:53 compute-0 podman[242131]: 2025-12-09 10:53:53.002183608 +0000 UTC m=+0.134490414 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:53:57 compute-0 nova_compute[189493]: 2025-12-09 10:53:57.190 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:57 compute-0 nova_compute[189493]: 2025-12-09 10:53:57.753 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:53:59 compute-0 podman[203687]: time="2025-12-09T10:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:53:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:53:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4778 "" "Go-http-client/1.1"
Dec 09 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:54:01 compute-0 podman[242155]: 2025-12-09 10:54:01.969862667 +0000 UTC m=+0.120786861 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 09 10:54:02 compute-0 nova_compute[189493]: 2025-12-09 10:54:02.193 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:02 compute-0 sshd-session[242174]: Invalid user dev from 159.223.8.217 port 35392
Dec 09 10:54:02 compute-0 sshd-session[242174]: Connection closed by invalid user dev 159.223.8.217 port 35392 [preauth]
Dec 09 10:54:02 compute-0 nova_compute[189493]: 2025-12-09 10:54:02.756 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:04 compute-0 podman[242176]: 2025-12-09 10:54:04.91260148 +0000 UTC m=+0.073160814 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:54:07 compute-0 nova_compute[189493]: 2025-12-09 10:54:07.194 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:07 compute-0 nova_compute[189493]: 2025-12-09 10:54:07.760 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:09 compute-0 podman[242198]: 2025-12-09 10:54:09.951311536 +0000 UTC m=+0.099607426 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 09 10:54:10 compute-0 podman[242218]: 2025-12-09 10:54:10.936031868 +0000 UTC m=+0.084855306 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public)
Dec 09 10:54:12 compute-0 nova_compute[189493]: 2025-12-09 10:54:12.199 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:12 compute-0 nova_compute[189493]: 2025-12-09 10:54:12.765 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:12 compute-0 podman[242238]: 2025-12-09 10:54:12.928518595 +0000 UTC m=+0.076774047 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:54:12 compute-0 podman[242239]: 2025-12-09 10:54:12.98695318 +0000 UTC m=+0.126983160 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4)
Dec 09 10:54:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:54:16.984 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:54:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:54:16.985 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:54:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:54:16.987 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:54:17 compute-0 nova_compute[189493]: 2025-12-09 10:54:17.199 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:17 compute-0 nova_compute[189493]: 2025-12-09 10:54:17.774 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:19 compute-0 podman[242273]: 2025-12-09 10:54:19.020341174 +0000 UTC m=+0.159998770 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, version=9.6, distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container)
Dec 09 10:54:20 compute-0 podman[242295]: 2025-12-09 10:54:20.976607149 +0000 UTC m=+0.130428478 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 09 10:54:22 compute-0 nova_compute[189493]: 2025-12-09 10:54:22.204 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:22 compute-0 nova_compute[189493]: 2025-12-09 10:54:22.779 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.290 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.292 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.304 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.309 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.309 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:54:23.310067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.315 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.320 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:54:23.321333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.363 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.364 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.364 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.407 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.408 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.409 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.411 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.412 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:54:23.411897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.413 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.414 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.414 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.415 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.415 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.415 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.416 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.416 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:54:23.415539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.418 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.418 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.419 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:54:23.418951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.542 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.543 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.544 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.670 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.671 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.672 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.675 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.675 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.676 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:54:23.675234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.679 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.679 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.679 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.680 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:54:23.679439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.681 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.682 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.682 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.682 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.685 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.685 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.686 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.686 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.687 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.687 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:54:23.684704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.690 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.690 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.691 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.691 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.692 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:54:23.689751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.692 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.694 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.694 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.694 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.695 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:54:23.694993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.735 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 188690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.775 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 38210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.776 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.776 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.777 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.777 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.777 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.777 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:54:23.777180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.778 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.778 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.779 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.779 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.780 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.782 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.782 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.782 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.783 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:54:23.781960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.784 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.784 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.785 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.786 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.786 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.787 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.787 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.787 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:54:23.787455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.788 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2118298266 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.788 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.789 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.789 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.790 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.790 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.791 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.793 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.794 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:54:23.792580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.796 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 4934 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.796 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:54:23.795884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.796 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.797 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.798 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.798 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.798 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.799 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.799 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.800 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.800 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.801 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.801 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:54:23.798682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.803 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.804 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.804 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.804 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.804 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:54:23.804490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.805 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.807 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.808 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.810 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.812 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.812 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.812 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
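The per-instance network.incoming.packets volumes above (32 and 18) are gathered by ceilometer's compute pollsters through its libvirt inspector. A minimal sketch of reading the same counters directly with the libvirt Python bindings, assuming libvirt-python is installed on the host and reusing the instance UUID and tap device name (tap2c684388-b6) that appear in the nova info-cache entry later in this log:

import libvirt

# Read-only connection to the same hypervisor nova_compute manages.
conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByUUIDString("41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f")

# interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
#                         tx_bytes, tx_packets, tx_errs, tx_drop).
rx_bytes, rx_packets, rx_errs, rx_drop, *_ = dom.interfaceStats("tap2c684388-b6")
print("network.incoming.packets:", rx_packets)
print("network.incoming.packets.drop:", rx_drop)
conn.close()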
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 49.12890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.818 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.818 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:54:23.807080) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:54:23.809883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:54:23.811689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:54:23.813412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:54:23.815032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:54:23.816342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:54:23.817644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.821 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.823 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:54:23 compute-0 podman[242322]: 2025-12-09 10:54:23.927029189 +0000 UTC m=+0.078910892 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
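The health_status entry above also records the node_exporter configuration: it publishes on host port 9100 behind the web config in /etc/node_exporter/node_exporter.yaml. A minimal sketch of scraping that endpoint locally; HTTPS with verification disabled is an assumption here, since the mounted web config normally enforces TLS with the certificates from /etc/node_exporter/tls:

import ssl
import urllib.request

# Illustration only: skip certificate verification for a local scrape.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with urllib.request.urlopen("https://localhost:9100/metrics", context=ctx) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("node_memory_MemAvailable_bytes"):
            print(line)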
Dec 09 10:54:25 compute-0 nova_compute[189493]: 2025-12-09 10:54:25.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:26 compute-0 nova_compute[189493]: 2025-12-09 10:54:26.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:27 compute-0 nova_compute[189493]: 2025-12-09 10:54:27.209 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:27 compute-0 nova_compute[189493]: 2025-12-09 10:54:27.783 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:28 compute-0 nova_compute[189493]: 2025-12-09 10:54:28.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:29 compute-0 podman[203687]: time="2025-12-09T10:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:54:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:54:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Dec 09 10:54:29 compute-0 nova_compute[189493]: 2025-12-09 10:54:29.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:30 compute-0 nova_compute[189493]: 2025-12-09 10:54:30.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:30 compute-0 nova_compute[189493]: 2025-12-09 10:54:30.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:54:30 compute-0 nova_compute[189493]: 2025-12-09 10:54:30.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:54:31 compute-0 sshd-session[242344]: Invalid user dev from 159.223.8.217 port 35906
Dec 09 10:54:31 compute-0 sshd-session[242344]: Connection closed by invalid user dev 159.223.8.217 port 35906 [preauth]
Dec 09 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
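These exporter errors are expected on a compute node: ovn-northd and the OVN databases do not run here, and the dpif-netdev PMD commands only apply to a userspace (DPDK) datapath, while the info-cache entry below shows this host using the kernel "system" datapath. A minimal sketch of the control-socket existence check the exporter is effectively performing; the socket directories below are the conventional defaults and an assumption for this deployment:

import glob

# Conventional control-socket locations (assumed paths).
sockets = {
    "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovs-vswitchd": "/var/run/openvswitch/ovs-vswitchd.*.ctl",
}

for name, pattern in sockets.items():
    found = glob.glob(pattern)
    print(f"{name}: {found[0] if found else 'no control socket on this node'}")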
Dec 09 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.007 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.008 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.009 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.010 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.210 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.788 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:33 compute-0 podman[242346]: 2025-12-09 10:54:33.007836062 +0000 UTC m=+0.156169882 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3)
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.722 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.741 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.742 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
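The "Updating instance_info_cache" entry above carries the full network_info JSON for the instance. A minimal sketch of pulling the fixed and floating addresses out of a payload shaped like that one (the sample below is trimmed from the data in the log):

import json

# Trimmed copy of the network_info structure logged above.
network_info = json.loads("""
[{"id": "2c684388-b6d9-4de0-8691-29807fabed2c",
  "network": {"subnets": [{"ips": [{"address": "192.168.0.250",
                                    "floating_ips": [{"address": "192.168.122.226"}]}]}]}}]
""")

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["id"], "fixed:", ip["address"], "floating:", floats)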
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.743 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.744 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.745 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.786 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.786 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.787 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.788 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.881 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.985 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.987 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.086 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.088 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.157 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.158 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.257 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.266 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.335 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.337 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.430 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.431 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.492 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.494 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.563 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
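The resource audit above measures each instance disk by shelling out to qemu-img, wrapped in oslo_concurrency.prlimit to cap address space and CPU time. A minimal sketch of running the same probe and reading the JSON it returns; the disk path is copied from the log and only exists on this host:

import json
import subprocess

path = "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk"
out = subprocess.run(
    ["qemu-img", "info", path, "--force-share", "--output=json"],
    check=True, capture_output=True, text=True,
).stdout

info = json.loads(out)
print(info["format"], "virtual-size:", info["virtual-size"], "actual-size:", info["actual-size"])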
Dec 09 10:54:35 compute-0 podman[242390]: 2025-12-09 10:54:35.920223514 +0000 UTC m=+0.076136493 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.984 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.986 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=72.16311645507812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.986 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.986 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.366 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.367 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.367 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.367 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.417 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.485 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.485 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.507 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.531 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.594 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.615 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
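The inventory above is what placement uses to size this provider: schedulable capacity per resource class is (total - reserved) * allocation_ratio, which is why 8 physical vCPUs advertise 32 schedulable VCPU at the 4.0 ratio. A minimal sketch of that arithmetic on the logged figures:

# Capacity arithmetic on the inventory reported above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")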
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.618 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.619 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.620 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.620 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.635 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.635 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.636 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.651 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:37 compute-0 nova_compute[189493]: 2025-12-09 10:54:37.212 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:37 compute-0 nova_compute[189493]: 2025-12-09 10:54:37.757 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:54:37 compute-0 nova_compute[189493]: 2025-12-09 10:54:37.758 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
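The skip above is controlled by nova's reclaim_instance_interval option (default 0). When it is positive, instance deletion becomes a soft delete and the _reclaim_queued_deletes periodic task purges instances older than the interval; a hedged nova.conf sketch:

    [DEFAULT]
    # seconds to keep soft-deleted instances before _reclaim_queued_deletes
    # purges them; <= 0 (the default) disables soft delete, hence the skip
    reclaim_instance_interval = 3600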
Dec 09 10:54:37 compute-0 nova_compute[189493]: 2025-12-09 10:54:37.792 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:40 compute-0 podman[242411]: 2025-12-09 10:54:40.972439053 +0000 UTC m=+0.125525276 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:54:41 compute-0 podman[242431]: 2025-12-09 10:54:41.122933486 +0000 UTC m=+0.105778203 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, version=9.4, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec 09 10:54:42 compute-0 nova_compute[189493]: 2025-12-09 10:54:42.215 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:42 compute-0 nova_compute[189493]: 2025-12-09 10:54:42.794 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:43 compute-0 podman[242452]: 2025-12-09 10:54:43.983584501 +0000 UTC m=+0.124186557 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 10:54:44 compute-0 podman[242451]: 2025-12-09 10:54:44.003404396 +0000 UTC m=+0.145191866 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:54:47 compute-0 nova_compute[189493]: 2025-12-09 10:54:47.221 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:47 compute-0 nova_compute[189493]: 2025-12-09 10:54:47.797 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:49 compute-0 podman[242489]: 2025-12-09 10:54:49.952308579 +0000 UTC m=+0.108953132 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6)
Dec 09 10:54:52 compute-0 podman[242510]: 2025-12-09 10:54:52.048089601 +0000 UTC m=+0.182566313 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 09 10:54:52 compute-0 nova_compute[189493]: 2025-12-09 10:54:52.221 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:52 compute-0 nova_compute[189493]: 2025-12-09 10:54:52.800 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:54 compute-0 podman[242535]: 2025-12-09 10:54:54.942492041 +0000 UTC m=+0.097901282 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
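The node_exporter above listens on host port 9100 with TLS settings taken from node_exporter.yaml. A quick probe sketch (certificate verification disabled purely for illustration, since the CA used here is deployment-specific):

    import ssl
    import urllib.request

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # illustration only; trust the real CA in practice
    with urllib.request.urlopen('https://localhost:9100/metrics', context=ctx) as r:
        print(r.read(300).decode())   # first few Prometheus samples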
Dec 09 10:54:57 compute-0 nova_compute[189493]: 2025-12-09 10:54:57.224 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:57 compute-0 nova_compute[189493]: 2025-12-09 10:54:57.804 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:54:59 compute-0 podman[203687]: time="2025-12-09T10:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:54:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:54:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
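The two GETs above are a metrics collector (most likely podman_exporter, given the CONTAINER_HOST=unix:///run/podman/podman.sock setting in its config logged below) hitting the libpod REST API on the podman socket. The same query can be reproduced over the Unix socket, e.g.:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # just enough HTTP-over-AF_UNIX for the libpod API; needs access
        # to /run/podman/podman.sock (root on this host)
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print([c['Names'] for c in json.loads(conn.getresponse().read())])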
Dec 09 10:55:00 compute-0 sshd-session[242559]: Invalid user dev from 159.223.8.217 port 46706
Dec 09 10:55:00 compute-0 sshd-session[242559]: Connection closed by invalid user dev 159.223.8.217 port 46706 [preauth]
Dec 09 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
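These exporter errors are expected on a compute node: ovn-northd only runs on the control plane, and the dpif-netdev/pmd-* appctl commands apply only to DPDK (netdev) datapaths, which this host does not use. The exporter probes for ovs-appctl control sockets; assuming the default runtime directories, their presence can be checked with:

    import glob
    # *.ctl files are the per-daemon ovs-appctl control sockets the
    # exporter needs; an empty list explains "no control socket files found"
    print(glob.glob('/var/run/openvswitch/*.ctl'))
    print(glob.glob('/var/run/ovn/*.ctl'))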
Dec 09 10:55:02 compute-0 nova_compute[189493]: 2025-12-09 10:55:02.225 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:02 compute-0 nova_compute[189493]: 2025-12-09 10:55:02.809 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:03 compute-0 podman[242561]: 2025-12-09 10:55:03.960119473 +0000 UTC m=+0.116204634 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:55:06 compute-0 podman[242581]: 2025-12-09 10:55:06.963660829 +0000 UTC m=+0.114251070 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:55:07 compute-0 nova_compute[189493]: 2025-12-09 10:55:07.228 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:07 compute-0 nova_compute[189493]: 2025-12-09 10:55:07.813 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:11 compute-0 podman[242606]: 2025-12-09 10:55:11.972322128 +0000 UTC m=+0.108197160 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 09 10:55:11 compute-0 podman[242605]: 2025-12-09 10:55:11.989207621 +0000 UTC m=+0.125968887 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, version=9.4, config_id=edpm, release=1214.1726694543)
Dec 09 10:55:12 compute-0 nova_compute[189493]: 2025-12-09 10:55:12.233 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:12 compute-0 nova_compute[189493]: 2025-12-09 10:55:12.816 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:14 compute-0 podman[242641]: 2025-12-09 10:55:14.934458904 +0000 UTC m=+0.081801470 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 09 10:55:14 compute-0 podman[242642]: 2025-12-09 10:55:14.939896167 +0000 UTC m=+0.081298017 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 10:55:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:16.986 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:16.987 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:16.987 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:17 compute-0 nova_compute[189493]: 2025-12-09 10:55:17.233 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:17 compute-0 nova_compute[189493]: 2025-12-09 10:55:17.819 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:20 compute-0 podman[242676]: 2025-12-09 10:55:20.93713281 +0000 UTC m=+0.083082507 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc.)
Dec 09 10:55:22 compute-0 nova_compute[189493]: 2025-12-09 10:55:22.236 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:22 compute-0 nova_compute[189493]: 2025-12-09 10:55:22.821 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:22 compute-0 podman[242696]: 2025-12-09 10:55:22.993012436 +0000 UTC m=+0.144328552 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 09 10:55:25 compute-0 podman[242722]: 2025-12-09 10:55:25.929688459 +0000 UTC m=+0.077731908 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:55:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:26.967 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:55:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:26.969 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
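The 'Matched UPDATE' line shows ovsdbapp's row-event dispatch: the metadata agent registers an event on the SB_Global table and reacts when nb_cfg changes (here deferring the Chassis table update, by 9 seconds in this run). A stripped-down sketch of such an event, using the constructor arguments visible in the log repr; the run() body is illustrative:

    from ovsdbapp.backend.ovs_idl import event

    class SbGlobalUpdateEvent(event.RowEvent):
        # mirrors the repr above: events=('update',), table='SB_Global'
        def __init__(self):
            super().__init__(('update',), 'SB_Global', None)

        def run(self, event, row, old):
            # the real agent schedules a delayed Chassis update here
            print('SB_Global nb_cfg is now', row.nb_cfg)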
Dec 09 10:55:26 compute-0 nova_compute[189493]: 2025-12-09 10:55:26.973 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:27 compute-0 nova_compute[189493]: 2025-12-09 10:55:27.239 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:27 compute-0 nova_compute[189493]: 2025-12-09 10:55:27.824 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:27 compute-0 nova_compute[189493]: 2025-12-09 10:55:27.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:27 compute-0 nova_compute[189493]: 2025-12-09 10:55:27.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:29 compute-0 podman[203687]: time="2025-12-09T10:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:55:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:55:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Dec 09 10:55:29 compute-0 nova_compute[189493]: 2025-12-09 10:55:29.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:29 compute-0 nova_compute[189493]: 2025-12-09 10:55:29.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:29 compute-0 sshd-session[242746]: Invalid user dev from 159.223.8.217 port 47474
Dec 09 10:55:30 compute-0 sshd-session[242746]: Connection closed by invalid user dev 159.223.8.217 port 47474 [preauth]
Dec 09 10:55:30 compute-0 nova_compute[189493]: 2025-12-09 10:55:30.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:30 compute-0 nova_compute[189493]: 2025-12-09 10:55:30.893 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:30 compute-0 nova_compute[189493]: 2025-12-09 10:55:30.895 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:55:31 compute-0 nova_compute[189493]: 2025-12-09 10:55:31.550 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:55:31 compute-0 nova_compute[189493]: 2025-12-09 10:55:31.551 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:55:31 compute-0 nova_compute[189493]: 2025-12-09 10:55:31.552 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:55:32 compute-0 nova_compute[189493]: 2025-12-09 10:55:32.241 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:32 compute-0 nova_compute[189493]: 2025-12-09 10:55:32.829 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.167 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.170 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.173 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.202 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.203 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.204 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.205 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.210 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.307 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.309 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.324 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.325 189497 INFO nova.compute.claims [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Claim successful on node compute-0.ctlplane.example.com
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.514 189497 DEBUG nova.compute.provider_tree [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.538 189497 DEBUG nova.scheduler.client.report [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.569 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.570 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.626 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.627 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.656 189497 INFO nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.698 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.788 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.801 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.802 189497 INFO nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Creating image(s)
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.804 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.805 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.807 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.833 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.859 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.892 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.893 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.894 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.895 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.936 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.938 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.940 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.961 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.048 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.062 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.095 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.131 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.132 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.133 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.189 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.192 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.219 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.222 189497 DEBUG nova.virt.disk.api [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.223 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.287 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.288 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.314 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.325 189497 DEBUG nova.virt.disk.api [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.326 189497 DEBUG nova.objects.instance [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 32dd7fb0-7003-48cc-b688-4b94946c911f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.359 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.360 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.363 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.390 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.391 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.418 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.481 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.494 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.516 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.518 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.520 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.543 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.572 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.574 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.604 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.607 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.645 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.647 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.676 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.678 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.680 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.742 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.744 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.764 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.766 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.767 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Ensure instance console log exists: /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.769 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.770 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.771 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.810 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:55:34 compute-0 podman[242800]: 2025-12-09 10:55:34.963254128 +0000 UTC m=+0.116235415 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.232 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.243 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=72.16319274902344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.244 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.244 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.324 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.325 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.325 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.326 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.326 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.440 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.489 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.529 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.531 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:35 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:35.973 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.759 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Successfully updated port: d6164edf-adb9-4fa5-9e6d-bae85d8af633 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 09 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.778 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.778 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.779 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 09 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.885 189497 DEBUG nova.compute.manager [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-changed-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.886 189497 DEBUG nova.compute.manager [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Refreshing instance network info cache due to event network-changed-d6164edf-adb9-4fa5-9e6d-bae85d8af633. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 09 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.887 189497 DEBUG oslo_concurrency.lockutils [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.942 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 09 10:55:37 compute-0 nova_compute[189493]: 2025-12-09 10:55:37.243 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:37 compute-0 nova_compute[189493]: 2025-12-09 10:55:37.832 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:37 compute-0 podman[242820]: 2025-12-09 10:55:37.907819702 +0000 UTC m=+0.065617087 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.514 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.514 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.774 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.812 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.813 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance network_info: |[{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.815 189497 DEBUG oslo_concurrency.lockutils [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.816 189497 DEBUG nova.network.neutron [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Refreshing network info cache for port d6164edf-adb9-4fa5-9e6d-bae85d8af633 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.823 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start _get_guest_xml network_info=[{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.836 189497 WARNING nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.855 189497 DEBUG nova.virt.libvirt.host [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.856 189497 DEBUG nova.virt.libvirt.host [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.868 189497 DEBUG nova.virt.libvirt.host [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.869 189497 DEBUG nova.virt.libvirt.host [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.871 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.872 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T10:47:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cf91b364-8467-4d1e-8c92-f7d1fab99905',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.873 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.875 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.876 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.877 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.878 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.880 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.881 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.882 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.883 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.884 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
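
[annotation] With no topology constraints in the flavor or image (preferences all 0, limits defaulting to 65536 as logged), the only factorisation of 1 vCPU is sockets=1, cores=1, threads=1, which is the single topology listed above. A rough sketch of that enumeration — not nova's own code, just the same kind of search:

    # Enumerate sockets*cores*threads factorisations of a vCPU count, the kind of
    # search nova.virt.hardware performs above; limits default to 65536 as logged.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))   # [(1, 1, 1)] -- matches the log
    print(possible_topologies(4))   # several candidates, e.g. (1, 1, 4), (1, 2, 2), (4, 1, 1)
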
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.892 189497 DEBUG nova.virt.libvirt.vif [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:55:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',id=3,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-8nh5c9bf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:55:33Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 09 10:55:38 compute-0 nova_compute[189493]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=32dd7fb0-7003-48cc-b688-4b94946c911f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.893 189497 DEBUG nova.network.os_vif_util [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.895 189497 DEBUG nova.network.os_vif_util [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.897 189497 DEBUG nova.objects.instance [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32dd7fb0-7003-48cc-b688-4b94946c911f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.914 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] End _get_guest_xml xml=<domain type="kvm">
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <uuid>32dd7fb0-7003-48cc-b688-4b94946c911f</uuid>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <name>instance-00000003</name>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <memory>524288</memory>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <vcpu>1</vcpu>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <metadata>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <nova:name>vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y</nova:name>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <nova:creationTime>2025-12-09 10:55:38</nova:creationTime>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <nova:flavor name="m1.small">
Dec 09 10:55:38 compute-0 nova_compute[189493]:         <nova:memory>512</nova:memory>
Dec 09 10:55:38 compute-0 nova_compute[189493]:         <nova:disk>1</nova:disk>
Dec 09 10:55:38 compute-0 nova_compute[189493]:         <nova:swap>0</nova:swap>
Dec 09 10:55:38 compute-0 nova_compute[189493]:         <nova:ephemeral>1</nova:ephemeral>
Dec 09 10:55:38 compute-0 nova_compute[189493]:         <nova:vcpus>1</nova:vcpus>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       </nova:flavor>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <nova:owner>
Dec 09 10:55:38 compute-0 nova_compute[189493]:         <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec 09 10:55:38 compute-0 nova_compute[189493]:         <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       </nova:owner>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <nova:root type="image" uuid="53d12211-5d5c-4333-b3ee-e3dcf1663767"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <nova:ports>
Dec 09 10:55:38 compute-0 nova_compute[189493]:         <nova:port uuid="d6164edf-adb9-4fa5-9e6d-bae85d8af633">
Dec 09 10:55:38 compute-0 nova_compute[189493]:           <nova:ip type="fixed" address="192.168.0.98" ipVersion="4"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:         </nova:port>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       </nova:ports>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </nova:instance>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   </metadata>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <sysinfo type="smbios">
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <system>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <entry name="manufacturer">RDO</entry>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <entry name="product">OpenStack Compute</entry>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <entry name="serial">32dd7fb0-7003-48cc-b688-4b94946c911f</entry>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <entry name="uuid">32dd7fb0-7003-48cc-b688-4b94946c911f</entry>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <entry name="family">Virtual Machine</entry>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </system>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   </sysinfo>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <os>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <boot dev="hd"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <smbios mode="sysinfo"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   </os>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <features>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <acpi/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <apic/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <vmcoreinfo/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   </features>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <clock offset="utc">
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <timer name="pit" tickpolicy="delay"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <timer name="hpet" present="no"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   </clock>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <cpu mode="host-model" match="exact">
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <topology sockets="1" cores="1" threads="1"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   </cpu>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   <devices>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <target dev="vda" bus="virtio"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <target dev="vdb" bus="virtio"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <disk type="file" device="cdrom">
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <driver name="qemu" type="raw" cache="none"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.config"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <target dev="sda" bus="sata"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <interface type="ethernet">
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <mac address="fa:16:3e:83:9f:5d"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <model type="virtio"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <driver name="vhost" rx_queue_size="512"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <mtu size="1442"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <target dev="tapd6164edf-ad"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </interface>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <serial type="pty">
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <log file="/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/console.log" append="off"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </serial>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <video>
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <model type="virtio"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </video>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <input type="tablet" bus="usb"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <rng model="virtio">
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <backend model="random">/dev/urandom</backend>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </rng>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <controller type="usb" index="0"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     <memballoon model="virtio">
Dec 09 10:55:38 compute-0 nova_compute[189493]:       <stats period="10"/>
Dec 09 10:55:38 compute-0 nova_compute[189493]:     </memballoon>
Dec 09 10:55:38 compute-0 nova_compute[189493]:   </devices>
Dec 09 10:55:38 compute-0 nova_compute[189493]: </domain>
Dec 09 10:55:38 compute-0 nova_compute[189493]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
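
[annotation] The generated domain XML above can be inspected with nothing more than the standard library. A small sketch that pulls the disk targets and the interface MAC out of a trimmed copy of that document (the namespaced <nova:...> metadata is omitted here for brevity):

    import xml.etree.ElementTree as ET

    domain_xml = """<domain type="kvm">
      <name>instance-00000003</name>
      <devices>
        <disk type="file" device="disk"><target dev="vda" bus="virtio"/></disk>
        <disk type="file" device="disk"><target dev="vdb" bus="virtio"/></disk>
        <disk type="file" device="cdrom"><target dev="sda" bus="sata"/></disk>
        <interface type="ethernet"><mac address="fa:16:3e:83:9f:5d"/></interface>
      </devices>
    </domain>"""

    root = ET.fromstring(domain_xml)
    disks = [(d.get("device"), d.find("target").get("dev")) for d in root.iter("disk")]
    macs = [i.find("mac").get("address") for i in root.iter("interface")]
    print(disks)  # [('disk', 'vda'), ('disk', 'vdb'), ('cdrom', 'sda')]
    print(macs)   # ['fa:16:3e:83:9f:5d']
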
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.930 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Preparing to wait for external event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.931 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.932 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.933 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.935 189497 DEBUG nova.virt.libvirt.vif [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:55:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',id=3,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-8nh5c9bf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:55:33Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Dec 09 10:55:38 compute-0 nova_compute[189493]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=32dd7fb0-7003-48cc-b688-4b94946c911f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.935 189497 DEBUG nova.network.os_vif_util [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.937 189497 DEBUG nova.network.os_vif_util [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.938 189497 DEBUG os_vif [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.940 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.940 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.942 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.965 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.966 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6164edf-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.968 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6164edf-ad, col_values=(('external_ids', {'iface-id': 'd6164edf-adb9-4fa5-9e6d-bae85d8af633', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:9f:5d', 'vm-uuid': '32dd7fb0-7003-48cc-b688-4b94946c911f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.971 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:38 compute-0 NetworkManager[56302]: <info>  [1765277738.9745] manager: (tapd6164edf-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.980 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 09 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.993 189497 INFO os_vif [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad')
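
[annotation] The AddPortCommand/DbSetCommand transaction above is what actually attaches the tap device to br-int. A rough ovs-vsctl equivalent, run from Python purely for illustration — nova and os-vif talk to ovsdb-server through the ovsdbapp IDL shown in the log, not through the CLI:

    import subprocess

    port = "tapd6164edf-ad"
    iface_id = "d6164edf-adb9-4fa5-9e6d-bae85d8af633"
    mac = "fa:16:3e:83:9f:5d"
    vm_uuid = "32dd7fb0-7003-48cc-b688-4b94946c911f"

    # --may-exist mirrors may_exist=True in AddPortCommand; the external_ids keys
    # mirror the DbSetCommand columns logged above. The MAC is double-quoted so
    # ovs-vsctl accepts the colons in the value.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f'external_ids:attached-mac="{mac}"',
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True,
    )
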
Dec 09 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.081 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.082 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:55:39 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:55:38.892 189497 DEBUG nova.virt.libvirt.vif [None req-7a1e6ff3-fa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.083 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.083 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No VIF found with MAC fa:16:3e:83:9f:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 09 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.085 189497 INFO nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Using config drive
Dec 09 10:55:39 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:55:38.935 189497 DEBUG nova.virt.libvirt.vif [None req-7a1e6ff3-fa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.988 189497 INFO nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Creating config drive at /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.config
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.003 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud_2su99 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.139 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud_2su99" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
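
[annotation] The ISO produced by the mkisofs call above is the config drive that the sata cdrom in the domain XML points at, and it carries the standard openstack/latest layout. A hedged sketch of reading it inside the guest by its config-2 volume label (assumes root privileges and an existing /mnt/config mount point):

    import json
    import subprocess

    # the -V config-2 label from the mkisofs command above shows up under by-label
    subprocess.run(["mount", "-o", "ro", "/dev/disk/by-label/config-2", "/mnt/config"], check=True)
    with open("/mnt/config/openstack/latest/meta_data.json") as f:
        meta = json.load(f)
    print(meta["uuid"], meta["name"])
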
Dec 09 10:55:40 compute-0 kernel: tapd6164edf-ad: entered promiscuous mode
Dec 09 10:55:40 compute-0 ovn_controller[97780]: 2025-12-09T10:55:40Z|00040|binding|INFO|Claiming lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 for this chassis.
Dec 09 10:55:40 compute-0 ovn_controller[97780]: 2025-12-09T10:55:40Z|00041|binding|INFO|d6164edf-adb9-4fa5-9e6d-bae85d8af633: Claiming fa:16:3e:83:9f:5d 192.168.0.98
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.251 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.258 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:9f:5d 192.168.0.98'], port_security=['fa:16:3e:83:9f:5d 192.168.0.98'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-fel25ona52mn-zi55qxbdeak4-port-7xvtkga34xqd', 'neutron:cidrs': '192.168.0.98/24', 'neutron:device_id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-fel25ona52mn-zi55qxbdeak4-port-7xvtkga34xqd', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=d6164edf-adb9-4fa5-9e6d-bae85d8af633) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.259 106644 INFO neutron.agent.ovn.metadata.agent [-] Port d6164edf-adb9-4fa5-9e6d-bae85d8af633 in datapath c5af7354-5afe-400a-9e13-5500648117d8 bound to our chassis
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.261 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8
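
[annotation] "Provisioning metadata" here means the agent wires up its metadata proxy for this datapath; once that is in place, a process inside the guest can fetch the same data over the link-local metadata endpoint (the same content also lands on the config drive). A minimal in-guest check, for illustration only:

    import json
    from urllib.request import urlopen

    # standard Nova metadata endpoint, reachable from inside the instance
    with urlopen("http://169.254.169.254/openstack/latest/meta_data.json", timeout=5) as resp:
        meta = json.load(resp)
    print(meta["uuid"])
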
Dec 09 10:55:40 compute-0 NetworkManager[56302]: <info>  [1765277740.2689] manager: (tapd6164edf-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec 09 10:55:40 compute-0 ovn_controller[97780]: 2025-12-09T10:55:40Z|00042|binding|INFO|Setting lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 ovn-installed in OVS
Dec 09 10:55:40 compute-0 ovn_controller[97780]: 2025-12-09T10:55:40Z|00043|binding|INFO|Setting lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 up in Southbound
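
[annotation] ovn-controller has now claimed the logical port for this chassis and marked it up in the southbound database, which is what will eventually trigger the network-vif-plugged event nova prepared to wait for above. One way to confirm the binding from the chassis, shelling out to ovn-sbctl for illustration (a sketch, not anything the compute services run):

    import subprocess

    # show the Port_Binding row for this logical port; chassis should be set and up=true
    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=d6164edf-adb9-4fa5-9e6d-bae85d8af633"],
        check=True,
    )
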
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.272 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.275 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.284 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd10290-0d9b-40ce-b443-2684353b0bb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:55:40 compute-0 systemd-machined[155790]: New machine qemu-3-instance-00000003.
Dec 09 10:55:40 compute-0 systemd-udevd[242866]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:55:40 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.328 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5a629f-a943-42c9-a2e7-92ec02402697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:55:40 compute-0 NetworkManager[56302]: <info>  [1765277740.3310] device (tapd6164edf-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.331 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[fa768375-0040-49c4-bf84-17be1f1bfeb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:55:40 compute-0 NetworkManager[56302]: <info>  [1765277740.3432] device (tapd6164edf-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.365 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[86a9390e-830f-4760-b239-f92dcd518627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.388 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad11e07-4e31-435a-82c8-7a4ddbd68c09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 33701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242875, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.410 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[0040de22-326d-4d0c-97fa-6f122e283db4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242878, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242878, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.413 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.416 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.419 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.420 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.421 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.421 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.422 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:55:40 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 09 10:55:40 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.942 189497 DEBUG nova.compute.manager [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.943 189497 DEBUG oslo_concurrency.lockutils [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.943 189497 DEBUG oslo_concurrency.lockutils [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.943 189497 DEBUG oslo_concurrency.lockutils [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.943 189497 DEBUG nova.compute.manager [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Processing event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.019 189497 DEBUG nova.network.neutron [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updated VIF entry in instance network info cache for port d6164edf-adb9-4fa5-9e6d-bae85d8af633. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.020 189497 DEBUG nova.network.neutron [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.044 189497 DEBUG oslo_concurrency.lockutils [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.100 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.101 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277741.1012912, 32dd7fb0-7003-48cc-b688-4b94946c911f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.102 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] VM Started (Lifecycle Event)
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.109 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.115 189497 INFO nova.virt.libvirt.driver [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance spawned successfully.
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.115 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.129 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.140 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.147 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.147 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.148 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.148 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.149 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.149 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.161 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.161 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277741.1014433, 32dd7fb0-7003-48cc-b688-4b94946c911f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.161 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] VM Paused (Lifecycle Event)
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.186 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.205 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277741.106694, 32dd7fb0-7003-48cc-b688-4b94946c911f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.205 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] VM Resumed (Lifecycle Event)
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.689 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.694 189497 INFO nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Took 7.89 seconds to spawn the instance on the hypervisor.
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.695 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.701 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.736 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.777 189497 INFO nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Took 8.51 seconds to build instance.
Dec 09 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.800 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:42 compute-0 nova_compute[189493]: 2025-12-09 10:55:42.248 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:42 compute-0 podman[242906]: 2025-12-09 10:55:42.964945559 +0000 UTC m=+0.103106028 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, architecture=x86_64)
Dec 09 10:55:42 compute-0 podman[242907]: 2025-12-09 10:55:42.992180981 +0000 UTC m=+0.123137918 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 09 10:55:43 compute-0 nova_compute[189493]: 2025-12-09 10:55:43.973 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.127 189497 DEBUG nova.compute.manager [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.130 189497 DEBUG oslo_concurrency.lockutils [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.131 189497 DEBUG oslo_concurrency.lockutils [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.132 189497 DEBUG oslo_concurrency.lockutils [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.134 189497 DEBUG nova.compute.manager [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] No waiting events found dispatching network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.135 189497 WARNING nova.compute.manager [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received unexpected event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 for instance with vm_state active and task_state None.
Dec 09 10:55:45 compute-0 podman[242941]: 2025-12-09 10:55:45.932333362 +0000 UTC m=+0.090196116 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 09 10:55:45 compute-0 podman[242942]: 2025-12-09 10:55:45.976160189 +0000 UTC m=+0.119265079 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 09 10:55:47 compute-0 nova_compute[189493]: 2025-12-09 10:55:47.250 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:48 compute-0 nova_compute[189493]: 2025-12-09 10:55:48.978 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:51 compute-0 podman[242975]: 2025-12-09 10:55:51.96110504 +0000 UTC m=+0.113594081 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 09 10:55:52 compute-0 nova_compute[189493]: 2025-12-09 10:55:52.254 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:53 compute-0 nova_compute[189493]: 2025-12-09 10:55:53.981 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:54 compute-0 podman[242997]: 2025-12-09 10:55:54.012620643 +0000 UTC m=+0.163525008 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:55:56 compute-0 podman[243023]: 2025-12-09 10:55:56.927850557 +0000 UTC m=+0.078496139 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 10:55:57 compute-0 nova_compute[189493]: 2025-12-09 10:55:57.256 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:58 compute-0 nova_compute[189493]: 2025-12-09 10:55:58.986 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:55:59 compute-0 sshd-session[243047]: Invalid user dev from 159.223.8.217 port 47776
Dec 09 10:55:59 compute-0 sshd-session[243047]: Connection closed by invalid user dev 159.223.8.217 port 47776 [preauth]
Dec 09 10:55:59 compute-0 podman[203687]: time="2025-12-09T10:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:55:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:55:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4771 "" "Go-http-client/1.1"
Dec 09 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:56:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:56:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:56:02 compute-0 nova_compute[189493]: 2025-12-09 10:56:02.258 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:03 compute-0 nova_compute[189493]: 2025-12-09 10:56:03.993 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:05 compute-0 podman[243049]: 2025-12-09 10:56:05.971328443 +0000 UTC m=+0.117514411 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 09 10:56:07 compute-0 nova_compute[189493]: 2025-12-09 10:56:07.260 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:08 compute-0 podman[243067]: 2025-12-09 10:56:08.927104971 +0000 UTC m=+0.077827110 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:56:09 compute-0 nova_compute[189493]: 2025-12-09 10:56:08.999 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:10 compute-0 ovn_controller[97780]: 2025-12-09T10:56:10Z|00044|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 09 10:56:12 compute-0 ovn_controller[97780]: 2025-12-09T10:56:12Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:9f:5d 192.168.0.98
Dec 09 10:56:12 compute-0 ovn_controller[97780]: 2025-12-09T10:56:12Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:9f:5d 192.168.0.98
Dec 09 10:56:12 compute-0 nova_compute[189493]: 2025-12-09 10:56:12.263 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:13 compute-0 podman[243106]: 2025-12-09 10:56:13.939589407 +0000 UTC m=+0.076641206 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 09 10:56:13 compute-0 podman[243105]: 2025-12-09 10:56:13.939854035 +0000 UTC m=+0.076661267 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_id=edpm)
Dec 09 10:56:14 compute-0 nova_compute[189493]: 2025-12-09 10:56:14.005 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:16 compute-0 podman[243144]: 2025-12-09 10:56:16.961481817 +0000 UTC m=+0.114810395 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 10:56:16 compute-0 podman[243143]: 2025-12-09 10:56:16.982855025 +0000 UTC m=+0.126101702 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 10:56:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:56:16.988 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:56:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:56:16.989 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:56:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:56:16.989 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:56:17 compute-0 nova_compute[189493]: 2025-12-09 10:56:17.266 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:19 compute-0 nova_compute[189493]: 2025-12-09 10:56:19.008 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:22 compute-0 nova_compute[189493]: 2025-12-09 10:56:22.269 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:22 compute-0 podman[243182]: 2025-12-09 10:56:22.958094024 +0000 UTC m=+0.100034381 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.291 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.292 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.307 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 32dd7fb0-7003-48cc-b688-4b94946c911f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 09 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.309 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/32dd7fb0-7003-48cc-b688-4b94946c911f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}c39d506960fbc5044d0bc54d9594567a78a3d14170701e46780a30eef7979125" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 09 10:56:24 compute-0 nova_compute[189493]: 2025-12-09 10:56:24.013 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.361 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 09 Dec 2025 10:56:23 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-caa63768-3dc6-4cde-91b7-11daee944012 x-openstack-request-id: req-caa63768-3dc6-4cde-91b7-11daee944012 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.362 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "32dd7fb0-7003-48cc-b688-4b94946c911f", "name": "vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y", "status": "ACTIVE", "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "user_id": "e6d3a937c2a74eb0816d9f63820935e0", "metadata": {"metering.server_group": "24f6e5b2-dd43-46f1-87a4-e2efc1300914"}, "hostId": "17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee", "image": {"id": "53d12211-5d5c-4333-b3ee-e3dcf1663767", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53d12211-5d5c-4333-b3ee-e3dcf1663767"}]}, "flavor": {"id": "cf91b364-8467-4d1e-8c92-f7d1fab99905", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cf91b364-8467-4d1e-8c92-f7d1fab99905"}]}, "created": "2025-12-09T10:55:31Z", "updated": "2025-12-09T10:55:41Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.98", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:83:9f:5d"}, {"version": 4, "addr": "192.168.122.244", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:83:9f:5d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/32dd7fb0-7003-48cc-b688-4b94946c911f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/32dd7fb0-7003-48cc-b688-4b94946c911f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-09T10:55:41.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.363 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/32dd7fb0-7003-48cc-b688-4b94946c911f used request id req-caa63768-3dc6-4cde-91b7-11daee944012 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.366 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'name': 'vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.373 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.380 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.380 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.381 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:56:24.381624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.390 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 32dd7fb0-7003-48cc-b688-4b94946c911f / tapd6164edf-ad inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.390 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.400 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 4975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.407 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:56:24.408868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.449 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.450 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.451 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.483 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.484 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.485 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.516 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.516 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.517 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:56:24.518438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.519 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 45 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.519 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:56:24.520447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.521 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.521 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:56:24.522408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.594 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.595 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.596 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.677 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.677 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.678 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.807 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.808 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:56:24.810076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 386883662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 91523197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 560654086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:56:24.812010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.814 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.819 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.819 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.819 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:56:24.815184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:56:24.817733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:56:24.820461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.842 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/cpu volume: 29770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.864 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 309580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.894 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 40010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.896 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.896 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.896 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.896 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.897 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.897 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.897 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.897 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:56:24.895684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:56:24.898823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.900 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.900 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.900 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.900 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 1654583151 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 9651641 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:56:24.901788) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2118298266 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.903 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.903 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.903 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.903 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:56:24.904658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes volume: 1751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 5004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:56:24.906149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.908 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.908 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.908 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.908 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:56:24.907670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.913 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.913 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.913 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y>]
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:56:24.910589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:56:24.911736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-09T10:56:24.913181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:56:24.914753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.917 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.917 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.917 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 34 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.917 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:56:24.916988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.919 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.919 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:56:24.918547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:56:24.920349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:56:24.921851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/memory.usage volume: 49.5390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y>]
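The ERROR above is ceilometer's permanent-failure guard: when the libvirt inspector can never provide data for a meter (here the *.rate pollsters, per the "LibvirtInspector does not provide data" line), the pollster raises PollsterPermanentError and the manager blacklists that resource on this source instead of retrying every interval. A minimal sketch of the pattern, with illustrative names (poll, run_pollster, blacklist are not ceilometer's actual API):

    # Hedged sketch of the blacklist-on-permanent-error pattern seen above.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.fail = resources          # resources to stop polling

    blacklist = set()

    def poll(resource):
        # stand-in for an inspector that has no data for a *.rate meter
        raise PollsterPermanentError([resource])

    def run_pollster(resources):
        for r in set(resources) - blacklist:
            try:
                poll(r)
            except PollsterPermanentError as e:
                blacklist.update(e.fail)   # "Prevent pollster ... anymore!"

    run_pollster(["vn-afn7y6w"])
    assert "vn-afn7y6w" in blacklist       # skipped on the next cycle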
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:56:24.923379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-09T10:56:24.924862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:56:24 compute-0 podman[243204]: 2025-12-09 10:56:24.978307372 +0000 UTC m=+0.132760058 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3)
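The podman health_status events in this log come from the healthcheck declared in each container's config_data ('test': '/openstack/healthcheck' against the listed mount). The same check can be driven by hand with podman's real healthcheck subcommand; a small sketch, using the container name from the event above:

    # Hedged sketch: run a container's configured healthcheck manually.
    import subprocess

    def healthy(name: str) -> bool:
        # exit status 0 corresponds to health_status=healthy above
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    print(healthy("ovn_controller"))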
Dec 09 10:56:27 compute-0 nova_compute[189493]: 2025-12-09 10:56:27.274 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:27 compute-0 nova_compute[189493]: 2025-12-09 10:56:27.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:56:27 compute-0 podman[243231]: 2025-12-09 10:56:27.963798351 +0000 UTC m=+0.100339150 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:56:28 compute-0 sshd-session[243229]: Invalid user dev from 159.223.8.217 port 37226
Dec 09 10:56:28 compute-0 sshd-session[243229]: Connection closed by invalid user dev 159.223.8.217 port 37226 [preauth]
Dec 09 10:56:29 compute-0 nova_compute[189493]: 2025-12-09 10:56:29.018 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:29 compute-0 podman[203687]: time="2025-12-09T10:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:56:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:56:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec 09 10:56:29 compute-0 nova_compute[189493]: 2025-12-09 10:56:29.845 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:56:30 compute-0 nova_compute[189493]: 2025-12-09 10:56:30.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:56:30 compute-0 nova_compute[189493]: 2025-12-09 10:56:30.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
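These recurring exporter ERRORs are lookup failures for appctl control sockets: a compute node runs no ovn-northd, and the ovsdb-server/datapath targets the exporter probes are absent or named differently in this containerized layout. The mechanism is just a *.ctl file under the daemon's rundir; a sketch of the probe, using conventional default paths rather than anything confirmed on this host:

    # Hedged sketch: the control socket files appctl.go reports as missing.
    # /run/openvswitch and /run/ovn are conventional rundirs; adjust as needed.
    import glob

    for pattern in ("/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")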
Dec 09 10:56:31 compute-0 nova_compute[189493]: 2025-12-09 10:56:31.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:56:31 compute-0 nova_compute[189493]: 2025-12-09 10:56:31.846 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:56:31 compute-0 nova_compute[189493]: 2025-12-09 10:56:31.847 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.276 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.422 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.423 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.424 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.425 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:56:34 compute-0 nova_compute[189493]: 2025-12-09 10:56:34.023 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.414 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
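The network_info blob above is the serialized Neutron view that Nova writes into the instance's info_cache; fixed and floating addresses are nested under network.subnets[].ips. A small sketch that pulls them out of exactly this structure (trimmed to the one VIF shown):

    # Hedged sketch: extract fixed/floating IPs from the cached network_info above.
    import json

    network_info = json.loads('''[{"id": "2c684388-b6d9-4de0-8691-29807fabed2c",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.250", "type": "fixed",
        "floating_ips": [{"address": "192.168.122.226", "type": "floating"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])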
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.438 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.439 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.439 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.440 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.440 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.485 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.486 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.486 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.487 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.598 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:36 compute-0 podman[243254]: 2025-12-09 10:56:36.65514709 +0000 UTC m=+0.107813480 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.706 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.709 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.770 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.771 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.860 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.862 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.964 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.981 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.058 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.059 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.145 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.148 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.247 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.260 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.291 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.356 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.363 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.436 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.439 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.517 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.519 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.580 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.581 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.636 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
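Each "Running cmd"/"CMD returned" pair above is the resource tracker sizing instance disks: oslo.concurrency wraps qemu-img info in prlimit (1 GiB address space, 30 s CPU) so a malformed image cannot wedge the audit, and --force-share allows reading a disk the running guest holds open. Reproducing one invocation by hand, with the argv taken verbatim from the log:

    # Hedged sketch: the disk-audit command exactly as nova runs it above.
    import json, subprocess

    argv = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824", "--cpu=30", "--", "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info",
            "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk",
            "--force-share", "--output=json"]
    out = subprocess.run(argv, capture_output=True, check=True).stdout
    print(json.loads(out)["virtual-size"])  # image size in bytes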
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.020 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.022 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4867MB free_disk=72.14188766479492GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.022 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.023 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.173 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.174 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.175 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.175 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.176 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.264 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.277 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
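The inventory record above is what fixes schedulable capacity; placement evaluates it roughly as capacity = (total - reserved) * allocation_ratio per resource class. Working the numbers from this exact line:

    # Worked example from the inventory data above.
    inv = {"VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
           "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
           "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9}}
    for rc, v in inv.items():
        print(rc, (v["total"] - v["reserved"]) * v["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2 -- consistent with the
    # 3 used of 8 vcpus and 6GB used disk in the final resource view above.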
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.307 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.307 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:56:39 compute-0 nova_compute[189493]: 2025-12-09 10:56:39.028 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:39 compute-0 nova_compute[189493]: 2025-12-09 10:56:39.709 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:56:39 compute-0 nova_compute[189493]: 2025-12-09 10:56:39.710 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:56:39 compute-0 podman[243311]: 2025-12-09 10:56:39.908451367 +0000 UTC m=+0.058960741 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:56:42 compute-0 nova_compute[189493]: 2025-12-09 10:56:42.282 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:44 compute-0 nova_compute[189493]: 2025-12-09 10:56:44.033 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:44 compute-0 podman[243335]: 2025-12-09 10:56:44.78469584 +0000 UTC m=+0.107219162 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30)
Dec 09 10:56:44 compute-0 podman[243336]: 2025-12-09 10:56:44.810783663 +0000 UTC m=+0.122063396 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec 09 10:56:47 compute-0 nova_compute[189493]: 2025-12-09 10:56:47.285 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:47 compute-0 podman[243374]: 2025-12-09 10:56:47.960036207 +0000 UTC m=+0.097203065 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 09 10:56:47 compute-0 podman[243373]: 2025-12-09 10:56:47.968695547 +0000 UTC m=+0.117365052 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 09 10:56:49 compute-0 nova_compute[189493]: 2025-12-09 10:56:49.036 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:52 compute-0 nova_compute[189493]: 2025-12-09 10:56:52.287 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:53 compute-0 podman[243412]: 2025-12-09 10:56:53.916436 +0000 UTC m=+0.072191500 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git)
Dec 09 10:56:54 compute-0 nova_compute[189493]: 2025-12-09 10:56:54.040 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:55 compute-0 podman[243432]: 2025-12-09 10:56:55.949677974 +0000 UTC m=+0.103580465 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 09 10:56:57 compute-0 nova_compute[189493]: 2025-12-09 10:56:57.289 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:58 compute-0 sshd-session[243458]: Invalid user dev from 159.223.8.217 port 43666
Dec 09 10:56:58 compute-0 podman[243460]: 2025-12-09 10:56:58.343095864 +0000 UTC m=+0.064854025 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:56:58 compute-0 sshd-session[243458]: Connection closed by invalid user dev 159.223.8.217 port 43666 [preauth]
Dec 09 10:56:59 compute-0 nova_compute[189493]: 2025-12-09 10:56:59.043 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:56:59 compute-0 podman[203687]: time="2025-12-09T10:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:56:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:56:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Dec 09 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:57:02 compute-0 nova_compute[189493]: 2025-12-09 10:57:02.291 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:04 compute-0 nova_compute[189493]: 2025-12-09 10:57:04.048 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:06 compute-0 sshd-session[243484]: error: kex_exchange_identification: read: Connection reset by peer
Dec 09 10:57:06 compute-0 sshd-session[243484]: Connection reset by 45.140.17.97 port 57127
Dec 09 10:57:06 compute-0 podman[243485]: 2025-12-09 10:57:06.927154944 +0000 UTC m=+0.079727631 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 09 10:57:07 compute-0 nova_compute[189493]: 2025-12-09 10:57:07.293 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:09 compute-0 nova_compute[189493]: 2025-12-09 10:57:09.052 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:10 compute-0 podman[243504]: 2025-12-09 10:57:10.938176119 +0000 UTC m=+0.079120555 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 10:57:12 compute-0 nova_compute[189493]: 2025-12-09 10:57:12.296 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:14 compute-0 nova_compute[189493]: 2025-12-09 10:57:14.057 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:14 compute-0 podman[243527]: 2025-12-09 10:57:14.921341293 +0000 UTC m=+0.077803340 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, config_id=edpm)
Dec 09 10:57:14 compute-0 podman[243528]: 2025-12-09 10:57:14.938661783 +0000 UTC m=+0.092313365 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 09 10:57:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:16.989 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:16.990 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:16.990 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:17 compute-0 nova_compute[189493]: 2025-12-09 10:57:17.299 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:18 compute-0 podman[243565]: 2025-12-09 10:57:18.943042441 +0000 UTC m=+0.077689407 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image)
Dec 09 10:57:18 compute-0 podman[243564]: 2025-12-09 10:57:18.965372564 +0000 UTC m=+0.105533197 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 09 10:57:19 compute-0 nova_compute[189493]: 2025-12-09 10:57:19.062 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:22 compute-0 nova_compute[189493]: 2025-12-09 10:57:22.301 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:24 compute-0 nova_compute[189493]: 2025-12-09 10:57:24.068 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:24 compute-0 podman[243602]: 2025-12-09 10:57:24.942524069 +0000 UTC m=+0.089105560 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Dec 09 10:57:26 compute-0 podman[243623]: 2025-12-09 10:57:26.94630223 +0000 UTC m=+0.098696155 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 10:57:27 compute-0 nova_compute[189493]: 2025-12-09 10:57:27.302 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:28 compute-0 sshd-session[243650]: Invalid user dev from 159.223.8.217 port 41242
Dec 09 10:57:28 compute-0 sshd-session[243650]: Connection closed by invalid user dev 159.223.8.217 port 41242 [preauth]
Dec 09 10:57:28 compute-0 nova_compute[189493]: 2025-12-09 10:57:28.796 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:28 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:28.793 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:57:28 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:28.794 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 10:57:28 compute-0 podman[243652]: 2025-12-09 10:57:28.919897739 +0000 UTC m=+0.073895976 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:57:29 compute-0 nova_compute[189493]: 2025-12-09 10:57:29.069 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:29 compute-0 podman[203687]: time="2025-12-09T10:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:57:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:57:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Dec 09 10:57:29 compute-0 nova_compute[189493]: 2025-12-09 10:57:29.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:29 compute-0 nova_compute[189493]: 2025-12-09 10:57:29.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:57:31 compute-0 nova_compute[189493]: 2025-12-09 10:57:31.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:31 compute-0 nova_compute[189493]: 2025-12-09 10:57:31.880 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:32 compute-0 nova_compute[189493]: 2025-12-09 10:57:32.305 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:32 compute-0 nova_compute[189493]: 2025-12-09 10:57:32.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:32 compute-0 nova_compute[189493]: 2025-12-09 10:57:32.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:33 compute-0 nova_compute[189493]: 2025-12-09 10:57:33.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:33 compute-0 nova_compute[189493]: 2025-12-09 10:57:33.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:57:34 compute-0 nova_compute[189493]: 2025-12-09 10:57:34.074 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:34 compute-0 nova_compute[189493]: 2025-12-09 10:57:34.394 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:57:34 compute-0 nova_compute[189493]: 2025-12-09 10:57:34.395 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:57:34 compute-0 nova_compute[189493]: 2025-12-09 10:57:34.395 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.235 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.238 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.274 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.389 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.390 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.406 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.407 189497 INFO nova.compute.claims [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Claim successful on node compute-0.ctlplane.example.com
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.631 189497 DEBUG nova.compute.provider_tree [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.646 189497 DEBUG nova.scheduler.client.report [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.661 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.675 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.677 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.685 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.686 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.688 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.689 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.728 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.729 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.730 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.731 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.763 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.764 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.788 189497 INFO nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.844 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.875 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.961 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.963 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.034 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.036 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.070 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.074 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.076 189497 INFO nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Creating image(s)
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.078 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.079 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.081 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.110 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.133 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.135 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.167 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.169 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.170 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.186 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.217 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.227 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.244 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.246 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.284 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.286 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.310 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk 1073741824" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.312 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.312 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.382 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.385 189497 DEBUG nova.virt.disk.api [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.386 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.408 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.411 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.456 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.459 189497 DEBUG nova.virt.disk.api [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.460 189497 DEBUG nova.objects.instance [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.486 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.488 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.490 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.521 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.521 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.541 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.589 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.591 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.592 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.609 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.633 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.642 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.685 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.688 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.724 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.726 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.760 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 1073741824" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.762 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.763 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.803 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.804 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.845 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.847 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.847 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Ensure instance console log exists: /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.848 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.849 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.850 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.863 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.864 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.898 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Successfully updated port: b903bb84-e176-4730-b223-613a9b01712b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.914 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.915 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.916 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.943 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.990 189497 DEBUG nova.compute.manager [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-changed-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.991 189497 DEBUG nova.compute.manager [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Refreshing instance network info cache due to event network-changed-b903bb84-e176-4730-b223-613a9b01712b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 09 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.992 189497 DEBUG oslo_concurrency.lockutils [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.039 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.306 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.464 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.465 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4860MB free_disk=72.14183044433594GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.466 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.467 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.547 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.548 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.548 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.549 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.550 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.551 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.658 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.672 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.698 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.699 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.782 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:57:37 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:37.796 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.805 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.806 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance network_info: |[{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.808 189497 DEBUG oslo_concurrency.lockutils [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.809 189497 DEBUG nova.network.neutron [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Refreshing network info cache for port b903bb84-e176-4730-b223-613a9b01712b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.816 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start _get_guest_xml network_info=[{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.830 189497 WARNING nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.851 189497 DEBUG nova.virt.libvirt.host [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.855 189497 DEBUG nova.virt.libvirt.host [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.862 189497 DEBUG nova.virt.libvirt.host [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.862 189497 DEBUG nova.virt.libvirt.host [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.863 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.864 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T10:47:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cf91b364-8467-4d1e-8c92-f7d1fab99905',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.864 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.865 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.865 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.868 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.869 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.870 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.872 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.873 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.874 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.875 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.883 189497 DEBUG nova.virt.libvirt.vif [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:57:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',id=4,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-d2fjtx7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:57:35Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTA0MDE3NDY2MzAyNTc2ODM2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Dec 09 10:57:37 compute-0 nova_compute[189493]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTA0MDE3NDY2MzAyNTc2ODM2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=7b43ca09-ed65-4465-9fcc-95caa6dc9a88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.884 189497 DEBUG nova.network.os_vif_util [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.885 189497 DEBUG nova.network.os_vif_util [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.887 189497 DEBUG nova.objects.instance [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.900 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] End _get_guest_xml xml=<domain type="kvm">
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <uuid>7b43ca09-ed65-4465-9fcc-95caa6dc9a88</uuid>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <name>instance-00000004</name>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <memory>524288</memory>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <vcpu>1</vcpu>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <metadata>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <nova:name>vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq</nova:name>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <nova:creationTime>2025-12-09 10:57:37</nova:creationTime>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <nova:flavor name="m1.small">
Dec 09 10:57:37 compute-0 nova_compute[189493]:         <nova:memory>512</nova:memory>
Dec 09 10:57:37 compute-0 nova_compute[189493]:         <nova:disk>1</nova:disk>
Dec 09 10:57:37 compute-0 nova_compute[189493]:         <nova:swap>0</nova:swap>
Dec 09 10:57:37 compute-0 nova_compute[189493]:         <nova:ephemeral>1</nova:ephemeral>
Dec 09 10:57:37 compute-0 nova_compute[189493]:         <nova:vcpus>1</nova:vcpus>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       </nova:flavor>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <nova:owner>
Dec 09 10:57:37 compute-0 nova_compute[189493]:         <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec 09 10:57:37 compute-0 nova_compute[189493]:         <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       </nova:owner>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <nova:root type="image" uuid="53d12211-5d5c-4333-b3ee-e3dcf1663767"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <nova:ports>
Dec 09 10:57:37 compute-0 nova_compute[189493]:         <nova:port uuid="b903bb84-e176-4730-b223-613a9b01712b">
Dec 09 10:57:37 compute-0 nova_compute[189493]:           <nova:ip type="fixed" address="192.168.0.92" ipVersion="4"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:         </nova:port>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       </nova:ports>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </nova:instance>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   </metadata>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <sysinfo type="smbios">
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <system>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <entry name="manufacturer">RDO</entry>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <entry name="product">OpenStack Compute</entry>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <entry name="serial">7b43ca09-ed65-4465-9fcc-95caa6dc9a88</entry>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <entry name="uuid">7b43ca09-ed65-4465-9fcc-95caa6dc9a88</entry>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <entry name="family">Virtual Machine</entry>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </system>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   </sysinfo>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <os>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <boot dev="hd"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <smbios mode="sysinfo"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   </os>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <features>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <acpi/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <apic/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <vmcoreinfo/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   </features>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <clock offset="utc">
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <timer name="pit" tickpolicy="delay"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <timer name="hpet" present="no"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   </clock>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <cpu mode="host-model" match="exact">
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <topology sockets="1" cores="1" threads="1"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   </cpu>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   <devices>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <target dev="vda" bus="virtio"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <target dev="vdb" bus="virtio"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <disk type="file" device="cdrom">
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <driver name="qemu" type="raw" cache="none"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.config"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <target dev="sda" bus="sata"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </disk>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <interface type="ethernet">
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <mac address="fa:16:3e:91:d3:f4"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <model type="virtio"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <driver name="vhost" rx_queue_size="512"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <mtu size="1442"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <target dev="tapb903bb84-e1"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </interface>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <serial type="pty">
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <log file="/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/console.log" append="off"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </serial>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <video>
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <model type="virtio"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </video>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <input type="tablet" bus="usb"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <rng model="virtio">
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <backend model="random">/dev/urandom</backend>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </rng>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <controller type="usb" index="0"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     <memballoon model="virtio">
Dec 09 10:57:37 compute-0 nova_compute[189493]:       <stats period="10"/>
Dec 09 10:57:37 compute-0 nova_compute[189493]:     </memballoon>
Dec 09 10:57:37 compute-0 nova_compute[189493]:   </devices>
Dec 09 10:57:37 compute-0 nova_compute[189493]: </domain>
Dec 09 10:57:37 compute-0 nova_compute[189493]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.914 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Preparing to wait for external event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.915 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.915 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.915 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.916 189497 DEBUG nova.virt.libvirt.vif [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:57:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',id=4,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-d2fjtx7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:57:35Z,user_data='Content-Type: multipart/mixed; boundary="===============1040174663025768362=="
MIME-Version: 1.0

--===============1040174663025768362==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

--===============1040174663025768362==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly so the commands to do
# this are injected through nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============1040174663025768362==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============1040174663025768362==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============1040174663025768362==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so it is timestamped with when finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============1040174663025768362==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============1040174663025768362==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============1040174663025768362==--
',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=7b43ca09-ed65-4465-9fcc-95caa6dc9a88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.916 189497 DEBUG nova.network.os_vif_util [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.917 189497 DEBUG nova.network.os_vif_util [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.918 189497 DEBUG os_vif [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.919 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.919 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.920 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.930 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.931 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb903bb84-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.932 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb903bb84-e1, col_values=(('external_ids', {'iface-id': 'b903bb84-e176-4730-b223-613a9b01712b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:d3:f4', 'vm-uuid': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:57:37 compute-0 NetworkManager[56302]: <info>  [1765277857.9369] manager: (tapb903bb84-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.937 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:37 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:57:37.883 189497 DEBUG nova.virt.libvirt.vif [None req-3a588a98-b0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.941 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.948 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.950 189497 INFO os_vif [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1')
Dec 09 10:57:37 compute-0 podman[243740]: 2025-12-09 10:57:37.986476737 +0000 UTC m=+0.125543359 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.007 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.007 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.007 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.008 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No VIF found with MAC fa:16:3e:91:d3:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 09 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.008 189497 INFO nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Using config drive
Dec 09 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.912 189497 INFO nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Creating config drive at /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.config
Dec 09 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.920 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaadafgmu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.068 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaadafgmu" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:57:39 compute-0 kernel: tapb903bb84-e1: entered promiscuous mode
Dec 09 10:57:39 compute-0 NetworkManager[56302]: <info>  [1765277859.2110] manager: (tapb903bb84-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec 09 10:57:39 compute-0 ovn_controller[97780]: 2025-12-09T10:57:39Z|00045|binding|INFO|Claiming lport b903bb84-e176-4730-b223-613a9b01712b for this chassis.
Dec 09 10:57:39 compute-0 ovn_controller[97780]: 2025-12-09T10:57:39Z|00046|binding|INFO|b903bb84-e176-4730-b223-613a9b01712b: Claiming fa:16:3e:91:d3:f4 192.168.0.92
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.218 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:39 compute-0 ovn_controller[97780]: 2025-12-09T10:57:39Z|00047|binding|INFO|Setting lport b903bb84-e176-4730-b223-613a9b01712b ovn-installed in OVS
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.242 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:d3:f4 192.168.0.92'], port_security=['fa:16:3e:91:d3:f4 192.168.0.92'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-port-rb2sbixhbgrm', 'neutron:cidrs': '192.168.0.92/24', 'neutron:device_id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-port-rb2sbixhbgrm', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=b903bb84-e176-4730-b223-613a9b01712b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.244 106644 INFO neutron.agent.ovn.metadata.agent [-] Port b903bb84-e176-4730-b223-613a9b01712b in datapath c5af7354-5afe-400a-9e13-5500648117d8 bound to our chassis
Dec 09 10:57:39 compute-0 ovn_controller[97780]: 2025-12-09T10:57:39Z|00048|binding|INFO|Setting lport b903bb84-e176-4730-b223-613a9b01712b up in Southbound
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.248 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.254 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:39 compute-0 systemd-machined[155790]: New machine qemu-4-instance-00000004.
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.276 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffcae58-3a1c-4cb4-a3f9-89c87ff7efdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:57:39 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec 09 10:57:39 compute-0 systemd-udevd[243785]: Network interface NamePolicy= disabled on kernel command line.
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.321 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[54ebd18d-47c5-412b-9170-64cdbfec9742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.325 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[759074cb-335f-4631-a49e-cfecf455c9ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:57:39 compute-0 NetworkManager[56302]: <info>  [1765277859.3342] device (tapb903bb84-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 09 10:57:39 compute-0 NetworkManager[56302]: <info>  [1765277859.3400] device (tapb903bb84-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.361 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[f4121cb2-1723-42e3-adba-018ae09f6519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.384 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[7a425e70-655f-453d-8b94-113675a63406]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 18085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243793, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.403 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec79a0b-c729-4b01-8619-175c3e0c8457]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243795, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243795, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.405 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.407 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.409 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.408 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.409 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.409 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.410 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.465 189497 DEBUG nova.compute.manager [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.468 189497 DEBUG oslo_concurrency.lockutils [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.470 189497 DEBUG oslo_concurrency.lockutils [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.471 189497 DEBUG oslo_concurrency.lockutils [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
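The acquire/release pair above is oslo.concurrency's lockutils guarding the per-instance event queue under a "<instance-uuid>-events" lock. A minimal sketch of the same pattern (the function body is illustrative only):

    from oslo_concurrency import lockutils

    def pop_instance_event(instance_uuid, event_name):
        # Serialize access to this instance's event queue, mirroring
        # the "<uuid>-events" lock name seen in the log.
        @lockutils.synchronized('%s-events' % instance_uuid)
        def _pop_event():
            # ...look up and remove the waiter for event_name here...
            return None
        return _pop_event()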
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.473 189497 DEBUG nova.compute.manager [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Processing event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.659 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277859.6588347, 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.660 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] VM Started (Lifecycle Event)
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.664 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.672 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.679 189497 INFO nova.virt.libvirt.driver [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance spawned successfully.
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.679 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.686 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.692 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.723 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.725 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.727 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.729 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.730 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.732 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.736 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.737 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277859.6591125, 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.738 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] VM Paused (Lifecycle Event)
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.808 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.815 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277859.6729915, 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.816 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] VM Resumed (Lifecycle Event)
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.831 189497 INFO nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Took 3.76 seconds to spawn the instance on the hypervisor.
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.832 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.851 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.862 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.917 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.933 189497 INFO nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Took 4.58 seconds to build instance.
Dec 09 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.952 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:40 compute-0 nova_compute[189493]: 2025-12-09 10:57:40.007 189497 DEBUG nova.network.neutron [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated VIF entry in instance network info cache for port b903bb84-e176-4730-b223-613a9b01712b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 09 10:57:40 compute-0 nova_compute[189493]: 2025-12-09 10:57:40.009 189497 DEBUG nova.network.neutron [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:57:40 compute-0 nova_compute[189493]: 2025-12-09 10:57:40.032 189497 DEBUG oslo_concurrency.lockutils [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
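The network_info blob logged above is plain JSON, so the fixed and floating addresses can be pulled out of a cache entry directly. A short sketch (the literal below is a trimmed stand-in for the logged entry):

    import json

    # Trimmed stand-in for the network_info JSON in the log line.
    cached_blob = '''[{"network": {"subnets": [{"ips": [
        {"address": "192.168.0.92",
         "floating_ips": [{"address": "192.168.122.176"}]}]}]}}]'''

    for vif in json.loads(cached_blob):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed', ip['address'])
                for fip in ip.get('floating_ips', []):
                    print('floating', fip['address'])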
Dec 09 10:57:41 compute-0 nova_compute[189493]: 2025-12-09 10:57:41.852 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:57:41 compute-0 nova_compute[189493]: 2025-12-09 10:57:41.853 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:57:41 compute-0 podman[243803]: 2025-12-09 10:57:41.979957985 +0000 UTC m=+0.110691554 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.310 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.527 189497 DEBUG nova.compute.manager [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.528 189497 DEBUG oslo_concurrency.lockutils [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.529 189497 DEBUG oslo_concurrency.lockutils [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.529 189497 DEBUG oslo_concurrency.lockutils [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.530 189497 DEBUG nova.compute.manager [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] No waiting events found dispatching network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.531 189497 WARNING nova.compute.manager [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received unexpected event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b for instance with vm_state active and task_state None.
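The warning above is benign: Neutron re-sent network-vif-plugged after the instance had already gone active, so no waiter was registered for it. Events like this reach Nova through the documented os-server-external-events API; a hedged sketch of that call (endpoint and token are placeholders, the body format is the documented one):

    import requests

    body = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88',
        'tag': 'b903bb84-e176-4730-b223-613a9b01712b',
        'status': 'completed',
    }]}
    # nova-api endpoint and token are placeholders.
    resp = requests.post(
        'http://nova-api.example:8774/v2.1/os-server-external-events',
        json=body, headers={'X-Auth-Token': 'REDACTED'})
    resp.raise_for_status()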
Dec 09 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.937 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:45 compute-0 podman[243827]: 2025-12-09 10:57:45.935335281 +0000 UTC m=+0.088044473 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, distribution-scope=public, name=ubi9, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 09 10:57:45 compute-0 podman[243828]: 2025-12-09 10:57:45.958704842 +0000 UTC m=+0.096874247 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Dec 09 10:57:47 compute-0 nova_compute[189493]: 2025-12-09 10:57:47.313 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:47 compute-0 nova_compute[189493]: 2025-12-09 10:57:47.940 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:49 compute-0 podman[243868]: 2025-12-09 10:57:49.985002532 +0000 UTC m=+0.116715434 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 09 10:57:50 compute-0 podman[243867]: 2025-12-09 10:57:50.010614353 +0000 UTC m=+0.146268100 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 09 10:57:52 compute-0 nova_compute[189493]: 2025-12-09 10:57:52.315 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:52 compute-0 nova_compute[189493]: 2025-12-09 10:57:52.942 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:55 compute-0 podman[243904]: 2025-12-09 10:57:55.931946684 +0000 UTC m=+0.086102381 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64)
Dec 09 10:57:57 compute-0 nova_compute[189493]: 2025-12-09 10:57:57.317 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:57 compute-0 sshd-session[243924]: Invalid user dev from 159.223.8.217 port 38988
Dec 09 10:57:57 compute-0 podman[243926]: 2025-12-09 10:57:57.868355523 +0000 UTC m=+0.125541518 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 09 10:57:57 compute-0 sshd-session[243924]: Connection closed by invalid user dev 159.223.8.217 port 38988 [preauth]
Dec 09 10:57:57 compute-0 nova_compute[189493]: 2025-12-09 10:57:57.944 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:57:59 compute-0 podman[203687]: time="2025-12-09T10:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:57:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:57:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Dec 09 10:57:59 compute-0 podman[243955]: 2025-12-09 10:57:59.935573341 +0000 UTC m=+0.078941949 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:58:02 compute-0 nova_compute[189493]: 2025-12-09 10:58:02.319 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:02 compute-0 nova_compute[189493]: 2025-12-09 10:58:02.948 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:07 compute-0 nova_compute[189493]: 2025-12-09 10:58:07.322 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:07 compute-0 nova_compute[189493]: 2025-12-09 10:58:07.952 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:09 compute-0 podman[243977]: 2025-12-09 10:58:09.014311283 +0000 UTC m=+0.149262649 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 09 10:58:09 compute-0 ovn_controller[97780]: 2025-12-09T10:58:09Z|00049|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec 09 10:58:12 compute-0 nova_compute[189493]: 2025-12-09 10:58:12.324 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:12 compute-0 nova_compute[189493]: 2025-12-09 10:58:12.954 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:12 compute-0 ovn_controller[97780]: 2025-12-09T10:58:12Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:91:d3:f4 192.168.0.92
Dec 09 10:58:12 compute-0 podman[244006]: 2025-12-09 10:58:12.986989558 +0000 UTC m=+0.125540579 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 10:58:12 compute-0 ovn_controller[97780]: 2025-12-09T10:58:12Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:91:d3:f4 192.168.0.92
Dec 09 10:58:16 compute-0 podman[244030]: 2025-12-09 10:58:16.978481033 +0000 UTC m=+0.122757485 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=)
Dec 09 10:58:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:58:16.990 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:58:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:58:16.990 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:58:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:58:16.991 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:58:17 compute-0 podman[244031]: 2025-12-09 10:58:17.031832711 +0000 UTC m=+0.165809089 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec 09 10:58:17 compute-0 nova_compute[189493]: 2025-12-09 10:58:17.326 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:17 compute-0 nova_compute[189493]: 2025-12-09 10:58:17.957 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:20 compute-0 podman[244070]: 2025-12-09 10:58:20.925571008 +0000 UTC m=+0.071747159 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 10:58:20 compute-0 podman[244071]: 2025-12-09 10:58:20.990461603 +0000 UTC m=+0.127842010 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 10:58:22 compute-0 nova_compute[189493]: 2025-12-09 10:58:22.329 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:22 compute-0 nova_compute[189493]: 2025-12-09 10:58:22.961 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.292 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.294 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
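The run of "Registering pollster" lines above shows the polling manager loading each meter plugin as a stevedore extension and binding it to one shared ThreadPoolExecutor. A minimal sketch of that load-and-dispatch pattern follows; the entry-point namespace string and the get_samples() placeholder are assumptions, not values taken from this log.

    # Sketch only: stevedore entry points fanned out onto a thread pool,
    # the pattern suggested by the "Registering pollster ... executor" lines.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # invoke_on_load=False keeps this a dry listing; real pollster classes
    # expect the service configuration in their constructors.
    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute',
                                     invoke_on_load=False)

    def run_pollster(ext):
        # Placeholder for the real work: the agent would call the pollster's
        # get_samples() here and publish the resulting samples.
        return ext.name

    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(run_pollster, ext) for ext in mgr]
        for fut in futures:
            print('registered pollster:', fut.result())
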
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.309 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'name': 'vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.315 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
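For these first two instances, discover_libvirt_polling assembles the instance records shown above locally, without calling Nova. A minimal sketch of reading that per-domain Nova metadata straight from libvirt; the metadata namespace URI is an assumption and is not stated anywhere in this log.

    import libvirt

    # Assumed namespace of the <nova:instance> element that nova-compute
    # writes into each domain's XML; adjust for the deployment in use.
    NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.0'

    conn = libvirt.openReadOnly('qemu:///system')
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        try:
            # Returns the Nova metadata fragment (name, flavor, owner, ...).
            xml = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS)
        except libvirt.libvirtError:
            continue  # domain not managed by Nova
        print(dom.name(), dom.UUIDString())
        print(xml)
    conn.close()
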
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.319 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec 09 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.322 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/7b43ca09-ed65-4465-9fcc-95caa6dc9a88 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}c39d506960fbc5044d0bc54d9594567a78a3d14170701e46780a30eef7979125" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.552 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 09 Dec 2025 10:58:23 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7693d82e-f672-4678-a1b8-dd79f86fe1ad x-openstack-request-id: req-7693d82e-f672-4678-a1b8-dd79f86fe1ad _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.553 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "7b43ca09-ed65-4465-9fcc-95caa6dc9a88", "name": "vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq", "status": "ACTIVE", "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "user_id": "e6d3a937c2a74eb0816d9f63820935e0", "metadata": {"metering.server_group": "24f6e5b2-dd43-46f1-87a4-e2efc1300914"}, "hostId": "17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee", "image": {"id": "53d12211-5d5c-4333-b3ee-e3dcf1663767", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53d12211-5d5c-4333-b3ee-e3dcf1663767"}]}, "flavor": {"id": "cf91b364-8467-4d1e-8c92-f7d1fab99905", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cf91b364-8467-4d1e-8c92-f7d1fab99905"}]}, "created": "2025-12-09T10:57:33Z", "updated": "2025-12-09T10:57:39Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.92", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:91:d3:f4"}, {"version": 4, "addr": "192.168.122.176", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:91:d3:f4"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/7b43ca09-ed65-4465-9fcc-95caa6dc9a88"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/7b43ca09-ed65-4465-9fcc-95caa6dc9a88"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-09T10:57:39.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.553 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/7b43ca09-ed65-4465-9fcc-95caa6dc9a88 used request id req-7693d82e-f672-4678-a1b8-dd79f86fe1ad request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.555 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
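The REQ/RESP pair above is a plain Nova API lookup for the one instance whose metadata was not available locally. A minimal sketch of reproducing that call with python-novaclient and a keystoneauth1 session; the credentials and OS_* environment variable names are placeholders, not values from this log.

    import os
    from keystoneauth1 import loading, session
    from novaclient import client as nova_client

    # Build an authenticated session from placeholder credentials.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url=os.environ['OS_AUTH_URL'],
        username=os.environ['OS_USERNAME'],
        password=os.environ['OS_PASSWORD'],
        project_name=os.environ['OS_PROJECT_NAME'],
        user_domain_name='Default',
        project_domain_name='Default',
    )
    sess = session.Session(auth=auth)

    # Microversion 2.1, matching the X-OpenStack-Nova-API-Version header above.
    nova = nova_client.Client('2.1', session=sess)
    server = nova.servers.get('7b43ca09-ed65-4465-9fcc-95caa6dc9a88')
    print(server.name, server.status, server.metadata)
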
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.561 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.562 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636

Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:58:24.562530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.568 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.576 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.580 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 / tapb903bb84-e1 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.581 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.587 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
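The per-instance network.incoming.bytes figures above are produced by the libvirt vNIC inspector (inspect_vnics). A minimal sketch of reading the same kind of cumulative interface counters directly from libvirt, assuming read-only access to the local qemu hypervisor; the domain name is taken from the log, the exact call choice is an assumption.

    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000004')

    # Find the guest's tap devices in the domain XML, then read their counters.
    root = ElementTree.fromstring(dom.XMLDesc(0))
    for target in root.findall('./devices/interface/target'):
        dev = target.get('dev')                      # e.g. tapb903bb84-e1
        stats = dom.interfaceStats(dev)              # 8 cumulative counters
        rx_bytes, rx_packets, rx_errs, rx_drop = stats[0:4]
        tx_bytes, tx_packets, tx_errs, tx_drop = stats[4:8]
        print(dev, 'rx_bytes=%d tx_packets=%d tx_drop=%d tx_errs=%d'
              % (rx_bytes, tx_packets, tx_drop, tx_errs))
    conn.close()

The same rx/tx drop and error fields are what the network.outgoing.packets, network.outgoing.packets.drop and network.outgoing.packets.error meters polled later in this cycle report.
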
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:58:24.589705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.630 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.631 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.631 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.666 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.666 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.667 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.700 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.701 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.701 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.732 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.733 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.733 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
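Each instance reports three disk.device.capacity samples, one per attached block device; 1073741824 bytes matches the 1 GiB root and ephemeral disks of the m1.small flavor seen in the discovery output, and the smaller figure is consistent with the config drive. A minimal sketch of a per-device sizing query against libvirt, with devices discovered from the domain XML; the specific call is an assumption, not something shown in this log.

    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000003')

    root = ElementTree.fromstring(dom.XMLDesc(0))
    for target in root.findall('./devices/disk/target'):
        dev = target.get('dev')   # e.g. vda, vdb, ...
        # blockInfo() reports logical capacity, current allocation and
        # physical size of the backing storage for one device.
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity, allocation, physical)
    conn.close()

The disk.device.usage samples later in this cycle come from PerDevicePhysicalPollster, which suggests they carry the physical figure from the same kind of query.
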
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.736 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:58:24.735354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.737 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.739 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.739 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.739 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.740 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.741 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:58:24.738404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.741 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.741 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.741 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:58:24.741541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.821 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.822 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.822 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.887 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.887 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.887 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.963 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.964 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.965 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
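disk.device.read.bytes, together with the read.requests and read.latency meters polled shortly after it, is a cumulative per-device read counter. A minimal sketch using libvirt's extended block statistics; the device name 'vda' and the exact call are assumptions, while the key names follow libvirt's typed-parameter naming.

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000001')

    stats = dom.blockStatsFlags('vda')
    print('read bytes   :', stats.get('rd_bytes'))        # cf. disk.device.read.bytes
    print('read requests:', stats.get('rd_operations'))   # cf. disk.device.read.requests
    print('read latency :', stats.get('rd_total_times'))  # nanoseconds, cf. disk.device.read.latency
    conn.close()
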
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.036 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:58:25.035807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.036 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.036 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 386883662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 91523197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 560654086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:58:25.037389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.039 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.039 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.039 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.039 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.040 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.040 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:58:25.041230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.043 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.043 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.043 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.043 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.046 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:58:25.044675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.046 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.046 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.047 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.047 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.047 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:58:25.048577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.071 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/cpu volume: 31480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.096 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 388560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.126 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 32640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.152 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 41680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:58:25.153543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:58:25.156963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.159 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.159 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 1670377851 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 9651641 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:58:25.160260) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2122486229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2203978842 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.162 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.162 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.162 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.162 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.164 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.164 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:58:25.163404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.166 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 7478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.166 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 1751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.166 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:58:25.165794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:58:25.167412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:58:25.170726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.171 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.171 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.171 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:58:25.172317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq>]
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-09T10:58:25.173409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:58:25.174368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:58:25.175459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:58:25.177086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes.delta volume: 507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.181 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 2474 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:58:25.179136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.181 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.181 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:58:25.180877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 48.98046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.183 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 49.72265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.183 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:58:25.182415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-09T10:58:25.184235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq>]
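The ERROR above is the permanent-failure path: LibvirtInspector reported two lines earlier that it provides no data for OutgoingBytesRatePollster, so the manager raises PollsterPermanentError and blacklists the affected resource for this source instead of retrying it every cycle. A simplified sketch of that pattern, with names shortened rather than taken verbatim from the manager code:

class PollsterPermanentError(Exception):
    """Raised when a resource can never yield samples for a pollster."""
    def __init__(self, resources):
        self.fail_res_list = resources

def poll(pollster, resources, blacklist):
    # Skip resources already marked as permanently failing.
    candidates = [r for r in resources if r not in blacklist]
    try:
        return list(pollster.get_samples(candidates))
    except PollsterPermanentError as exc:
        # "Prevent pollster ... from polling ... anymore!"
        blacklist.extend(exc.fail_res_list)
        return []

class RatePollster:
    def get_samples(self, resources):
        raise PollsterPermanentError(resources)

blacklist = []
poll(RatePollster(), ["vn-afn7y6w"], blacklist)
print(blacklist)  # the server is never polled by this pollster again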
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
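Every meter in the run of "Finished processing pollster" lines above comes from the meters list of the "pollsters" source in the agent's polling.yaml. A hedged example of what such a source definition looks like; the interval is an assumption (it is not visible in the log) and the meter list is abbreviated to names that do appear above:

import yaml  # PyYAML

polling_conf = yaml.safe_load("""
sources:
  - name: pollsters
    interval: 120        # seconds; assumed, not shown in the log
    meters:
      - cpu
      - power.state
      - memory.usage
      - disk.device.read.bytes
      - network.incoming.bytes
""")
print(polling_conf["sources"][0]["meters"])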
Dec 09 10:58:26 compute-0 podman[244108]: 2025-12-09 10:58:26.94099891 +0000 UTC m=+0.086039118 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=)
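Podman health checks surface in the journal as sprawling container health_status records like the one above, with the interesting fields buried as key=value pairs. A small convenience sketch (not part of any shipped tooling) that pulls the container name and health state out of such a line:

import re

LINE = ("podman[244108]: container health_status 5da5cd4e36e0 "
        "(image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, "
        "name=openstack_network_exporter, health_status=healthy, health_failing_streak=0)")

def health(line):
    """Extract (container name, health status) from a podman journal line."""
    name = re.search(r"(?<![\w.-])name=([\w.-]+)", line)
    status = re.search(r"health_status=(\w+)", line)
    return (name.group(1), status.group(1)) if name and status else None

print(health(LINE))  # ('openstack_network_exporter', 'healthy')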
Dec 09 10:58:27 compute-0 nova_compute[189493]: 2025-12-09 10:58:27.332 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:27 compute-0 sshd-session[244128]: Invalid user developer from 159.223.8.217 port 38582
Dec 09 10:58:27 compute-0 sshd-session[244128]: Connection closed by invalid user developer 159.223.8.217 port 38582 [preauth]
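The two sshd-session lines record a password-guessing probe from 159.223.8.217: the username "developer" does not exist on this host and the client disconnected before authenticating ([preauth]). On an internet-reachable node this is routine scanner noise, but it is worth tallying. A quick sketch that counts invalid-user attempts per source IP, assuming lines in the exact format shown above:

import re
from collections import Counter

PROBE = re.compile(r"Invalid user (\S+) from (\d+\.\d+\.\d+\.\d+)")

def count_probes(lines):
    """Tally invalid-user SSH attempts by source address."""
    hits = Counter()
    for line in lines:
        m = PROBE.search(line)
        if m:
            hits[m.group(2)] += 1
    return hits

print(count_probes([
    "sshd-session[244128]: Invalid user developer from 159.223.8.217 port 38582",
]))  # Counter({'159.223.8.217': 1})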
Dec 09 10:58:27 compute-0 nova_compute[189493]: 2025-12-09 10:58:27.964 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:29 compute-0 podman[244130]: 2025-12-09 10:58:29.02219508 +0000 UTC m=+0.163159339 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:58:29 compute-0 podman[203687]: time="2025-12-09T10:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:58:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:58:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec 09 10:58:29 compute-0 nova_compute[189493]: 2025-12-09 10:58:29.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:58:30 compute-0 podman[244155]: 2025-12-09 10:58:30.957313586 +0000 UTC m=+0.106272346 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
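The appctl errors above mean the exporter found none of the unix control sockets it uses to query the daemons: ovn-northd does not run on a compute node at all, and the dpif-netdev/pmd-* commands only apply to userspace (DPDK) datapaths, so on this host the messages are expected rather than a fault. A minimal check for the sockets, assuming the conventional OVS/OVN run directories that the container mounts (see the volumes in the openstack_network_exporter config above):

import glob

# Conventional control-socket locations inside the exporter container.
for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
    found = glob.glob(pattern)
    print(pattern, "->", found or "no control socket files found")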
Dec 09 10:58:31 compute-0 nova_compute[189493]: 2025-12-09 10:58:31.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:58:31 compute-0 nova_compute[189493]: 2025-12-09 10:58:31.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:58:32 compute-0 nova_compute[189493]: 2025-12-09 10:58:32.336 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:32 compute-0 nova_compute[189493]: 2025-12-09 10:58:32.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:58:32 compute-0 nova_compute[189493]: 2025-12-09 10:58:32.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:58:32 compute-0 nova_compute[189493]: 2025-12-09 10:58:32.969 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:34 compute-0 nova_compute[189493]: 2025-12-09 10:58:34.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:58:34 compute-0 nova_compute[189493]: 2025-12-09 10:58:34.845 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:58:35 compute-0 nova_compute[189493]: 2025-12-09 10:58:35.425 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:58:35 compute-0 nova_compute[189493]: 2025-12-09 10:58:35.426 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:58:35 compute-0 nova_compute[189493]: 2025-12-09 10:58:35.426 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.781 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.803 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.803 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
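The Updating instance_info_cache entry above carries the instance's full network_info blob: a single OVS port on br-int with fixed IP 192.168.0.98 and floating IP 192.168.122.244. Once parsed as JSON the addresses fall out directly; the structure below is copied (trimmed) from that log entry:

import json

network_info = json.loads("""
[{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633",
  "network": {"subnets": [{"ips": [{"address": "192.168.0.98",
    "floating_ips": [{"address": "192.168.122.244"}]}]}]}}]
""")

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"],
                  "floating:", [f["address"] for f in ip.get("floating_ips", [])])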
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.803 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.803 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.834 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.835 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.835 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.836 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.979 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.065 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
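Each qemu-img info call in this audit is wrapped in oslo_concurrency.prlimit, which caps the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) so a malformed or hostile image cannot wedge the compute agent; --force-share lets it read a disk the running guest holds open. The call can be reproduced directly from the log line (run on the compute host, with access to the instance path):

import json
import subprocess

cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",   # cap address space at 1 GiB
    "--cpu=30",          # cap CPU time at 30 s
    "--", "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info",
    "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk",
    "--force-share", "--output=json",
]
info = json.loads(subprocess.check_output(cmd))
print(info["format"], info["virtual-size"])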
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.076 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.169 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.169 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.253 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.254 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.331 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.340 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.345 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.404 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.406 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.466 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.467 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.524 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.525 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.598 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.618 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.691 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.693 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.789 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.792 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.908 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.911 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.974 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.992 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.002 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.087 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.088 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.150 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.151 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.211 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.215 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.279 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.711 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.713 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4612MB free_disk=72.11589431762695GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.713 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.714 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.995 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.995 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.996 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.996 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.996 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.997 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
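The Final resource view is just the four placement allocations logged above (each instance holds VCPU: 1, MEMORY_MB: 512, DISK_GB: 2) plus the 512 MB the inventory reserves for the host; a quick cross-check with the numbers from the log:

instances = 4
alloc = {"VCPU": 1, "MEMORY_MB": 512, "DISK_GB": 2}
reserved_ram_mb = 512  # MEMORY_MB 'reserved' from the inventory line below

print("used_vcpus =", instances * alloc["VCPU"])                         # 4
print("used_ram   =", instances * alloc["MEMORY_MB"] + reserved_ram_mb)  # 2560 MB
print("used_disk  =", instances * alloc["DISK_GB"])                      # 8 GB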
Dec 09 10:58:39 compute-0 nova_compute[189493]: 2025-12-09 10:58:39.126 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:58:39 compute-0 nova_compute[189493]: 2025-12-09 10:58:39.146 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
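The inventory record also shows how placement derives usable capacity for each resource class: capacity = (total - reserved) * allocation_ratio, so this node can schedule up to 32 VCPUs (8 * 4.0), 7167 MB of RAM ((7679 - 512) * 1.0) and 70.2 GB of disk ((79 - 1) * 0.9). Reproducing the arithmetic with the values from the line above:

inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, "capacity =", capacity)
# VCPU capacity = 32.0, MEMORY_MB capacity = 7167.0, DISK_GB capacity = 70.2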
Dec 09 10:58:39 compute-0 nova_compute[189493]: 2025-12-09 10:58:39.245 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:58:39 compute-0 nova_compute[189493]: 2025-12-09 10:58:39.247 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:58:39 compute-0 podman[244229]: 2025-12-09 10:58:39.971680586 +0000 UTC m=+0.109167473 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 09 10:58:42 compute-0 nova_compute[189493]: 2025-12-09 10:58:42.342 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:42 compute-0 nova_compute[189493]: 2025-12-09 10:58:42.977 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:43 compute-0 nova_compute[189493]: 2025-12-09 10:58:43.288 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:58:43 compute-0 nova_compute[189493]: 2025-12-09 10:58:43.289 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:58:43 compute-0 podman[244248]: 2025-12-09 10:58:43.971385769 +0000 UTC m=+0.112146172 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:58:47 compute-0 nova_compute[189493]: 2025-12-09 10:58:47.345 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:47 compute-0 nova_compute[189493]: 2025-12-09 10:58:47.980 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:48 compute-0 podman[244272]: 2025-12-09 10:58:48.00108253 +0000 UTC m=+0.131854466 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:58:48 compute-0 podman[244271]: 2025-12-09 10:58:48.010388067 +0000 UTC m=+0.146023613 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc.)
Dec 09 10:58:51 compute-0 podman[244311]: 2025-12-09 10:58:51.93328937 +0000 UTC m=+0.079096204 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 09 10:58:51 compute-0 podman[244312]: 2025-12-09 10:58:51.943503701 +0000 UTC m=+0.090538698 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 09 10:58:52 compute-0 nova_compute[189493]: 2025-12-09 10:58:52.348 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:52 compute-0 nova_compute[189493]: 2025-12-09 10:58:52.984 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:56 compute-0 sshd-session[244348]: Invalid user developer from 159.223.8.217 port 54114
Dec 09 10:58:56 compute-0 sshd-session[244348]: Connection closed by invalid user developer 159.223.8.217 port 54114 [preauth]
Dec 09 10:58:57 compute-0 nova_compute[189493]: 2025-12-09 10:58:57.350 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:57 compute-0 podman[244350]: 2025-12-09 10:58:57.958981905 +0000 UTC m=+0.106737969 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 09 10:58:57 compute-0 nova_compute[189493]: 2025-12-09 10:58:57.987 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:58:59 compute-0 podman[203687]: time="2025-12-09T10:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:58:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:58:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
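The two GET requests above are podman_exporter polling the libpod REST API through the socket it mounts at /run/podman/podman.sock. A hedged sketch of the same containers/json call from Python; the UnixHTTPConnection helper is illustrative and not part of any of these services.

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))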
Dec 09 10:59:00 compute-0 podman[244371]: 2025-12-09 10:59:00.04530757 +0000 UTC m=+0.178645299 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:59:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:59:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:59:01 compute-0 podman[244396]: 2025-12-09 10:59:01.985177294 +0000 UTC m=+0.133276945 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 10:59:02 compute-0 nova_compute[189493]: 2025-12-09 10:59:02.353 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:02 compute-0 nova_compute[189493]: 2025-12-09 10:59:02.991 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:07 compute-0 nova_compute[189493]: 2025-12-09 10:59:07.356 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:07 compute-0 nova_compute[189493]: 2025-12-09 10:59:07.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:11 compute-0 podman[244420]: 2025-12-09 10:59:11.00006612 +0000 UTC m=+0.135829953 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 09 10:59:12 compute-0 nova_compute[189493]: 2025-12-09 10:59:12.359 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:12 compute-0 nova_compute[189493]: 2025-12-09 10:59:12.998 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:14 compute-0 podman[244439]: 2025-12-09 10:59:14.786819782 +0000 UTC m=+0.089326766 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 10:59:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:59:16.991 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:59:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:59:16.991 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:59:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:59:16.992 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:59:17 compute-0 nova_compute[189493]: 2025-12-09 10:59:17.361 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:18 compute-0 nova_compute[189493]: 2025-12-09 10:59:18.002 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:18 compute-0 podman[244463]: 2025-12-09 10:59:18.926403114 +0000 UTC m=+0.079828133 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec 09 10:59:18 compute-0 podman[244464]: 2025-12-09 10:59:18.946325963 +0000 UTC m=+0.081960509 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 09 10:59:22 compute-0 nova_compute[189493]: 2025-12-09 10:59:22.364 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:22 compute-0 podman[244501]: 2025-12-09 10:59:22.951260247 +0000 UTC m=+0.095031458 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Dec 09 10:59:22 compute-0 podman[244502]: 2025-12-09 10:59:22.974664358 +0000 UTC m=+0.112402829 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Dec 09 10:59:23 compute-0 nova_compute[189493]: 2025-12-09 10:59:23.005 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:26 compute-0 sshd-session[244538]: Invalid user developer from 159.223.8.217 port 46576
Dec 09 10:59:26 compute-0 sshd-session[244538]: Connection closed by invalid user developer 159.223.8.217 port 46576 [preauth]
Dec 09 10:59:27 compute-0 nova_compute[189493]: 2025-12-09 10:59:27.367 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:28 compute-0 nova_compute[189493]: 2025-12-09 10:59:28.009 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:28 compute-0 podman[244540]: 2025-12-09 10:59:28.97500678 +0000 UTC m=+0.116018305 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec 09 10:59:29 compute-0 podman[203687]: time="2025-12-09T10:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:59:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:59:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec 09 10:59:30 compute-0 nova_compute[189493]: 2025-12-09 10:59:30.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:31 compute-0 podman[244561]: 2025-12-09 10:59:31.022928914 +0000 UTC m=+0.160462096 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 10:59:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 10:59:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 10:59:31 compute-0 nova_compute[189493]: 2025-12-09 10:59:31.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:32 compute-0 nova_compute[189493]: 2025-12-09 10:59:32.368 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:32 compute-0 nova_compute[189493]: 2025-12-09 10:59:32.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:32 compute-0 nova_compute[189493]: 2025-12-09 10:59:32.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:32 compute-0 podman[244586]: 2025-12-09 10:59:32.994018407 +0000 UTC m=+0.127095570 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 10:59:33 compute-0 nova_compute[189493]: 2025-12-09 10:59:33.012 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:33 compute-0 nova_compute[189493]: 2025-12-09 10:59:33.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:35 compute-0 nova_compute[189493]: 2025-12-09 10:59:35.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:35 compute-0 nova_compute[189493]: 2025-12-09 10:59:35.871 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:36 compute-0 nova_compute[189493]: 2025-12-09 10:59:36.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:36 compute-0 nova_compute[189493]: 2025-12-09 10:59:36.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 10:59:36 compute-0 nova_compute[189493]: 2025-12-09 10:59:36.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.370 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.466 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.467 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.467 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.468 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 10:59:38 compute-0 nova_compute[189493]: 2025-12-09 10:59:38.014 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.519 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
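The network_info blob above is the structure nova caches per instance. A small sketch that walks the same fields to list fixed and floating addresses; the literal below is trimmed from the logged entry, everything else is illustrative.

    # Trimmed copy of the structure logged above.
    network_info = [{
        "id": "2c684388-b6d9-4de0-8691-29807fabed2c",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{
                "address": "192.168.0.250",
                "floating_ips": [{"address": "192.168.122.226"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])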
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.534 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.535 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.535 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.576 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.577 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.577 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.578 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.713 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.802 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.803 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.859 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.860 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.962 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.963 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.055 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.066 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.160 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.161 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.220 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.222 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.295 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.296 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.359 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.366 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.434 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.435 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.498 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.499 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.579 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.580 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.637 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.646 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.704 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.707 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.772 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.774 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.834 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.835 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 10:59:40 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 09 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.925 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.390 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.391 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4622MB free_disk=72.11589431762695GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.392 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.392 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.601 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.601 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.679 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.757 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.758 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.784 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.807 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 10:59:41 compute-0 podman[244657]: 2025-12-09 10:59:41.955582564 +0000 UTC m=+0.093538348 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.116 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.138 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.143 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.143 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.144 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.372 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.017 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.466 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.467 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 10:59:44 compute-0 podman[244676]: 2025-12-09 10:59:44.943065637 +0000 UTC m=+0.095657704 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 10:59:45 compute-0 nova_compute[189493]: 2025-12-09 10:59:45.875 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:45 compute-0 nova_compute[189493]: 2025-12-09 10:59:45.875 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 10:59:45 compute-0 nova_compute[189493]: 2025-12-09 10:59:45.888 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 10:59:47 compute-0 nova_compute[189493]: 2025-12-09 10:59:47.376 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:48 compute-0 nova_compute[189493]: 2025-12-09 10:59:48.020 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.508 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.535 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.535 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 1bddf2bf-8932-4428-97d7-7342a7ec414b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.535 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 32dd7fb0-7003-48cc-b688-4b94946c911f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.536 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.536 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.537 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.537 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.537 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.538 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.538 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.538 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.539 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.576 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.582 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.637 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.642 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 10:59:49 compute-0 podman[244698]: 2025-12-09 10:59:49.950186334 +0000 UTC m=+0.090725503 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Dec 09 10:59:49 compute-0 podman[244699]: 2025-12-09 10:59:49.972448826 +0000 UTC m=+0.109032149 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 09 10:59:52 compute-0 nova_compute[189493]: 2025-12-09 10:59:52.379 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:53 compute-0 nova_compute[189493]: 2025-12-09 10:59:53.024 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:53 compute-0 podman[244734]: 2025-12-09 10:59:53.966380396 +0000 UTC m=+0.103509553 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 10:59:53 compute-0 podman[244736]: 2025-12-09 10:59:53.976398713 +0000 UTC m=+0.116402366 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 09 10:59:54 compute-0 sshd-session[244735]: Invalid user developer from 159.223.8.217 port 58168
Dec 09 10:59:54 compute-0 sshd-session[244735]: Connection closed by invalid user developer 159.223.8.217 port 58168 [preauth]
Dec 09 10:59:57 compute-0 nova_compute[189493]: 2025-12-09 10:59:57.382 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:58 compute-0 nova_compute[189493]: 2025-12-09 10:59:58.029 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 10:59:59 compute-0 podman[203687]: time="2025-12-09T10:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 10:59:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 10:59:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec 09 10:59:59 compute-0 podman[244773]: 2025-12-09 10:59:59.96719005 +0000 UTC m=+0.112607734 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Dec 09 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:00:01 compute-0 podman[244791]: 2025-12-09 11:00:01.996004648 +0000 UTC m=+0.148734926 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 11:00:02 compute-0 nova_compute[189493]: 2025-12-09 11:00:02.386 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:03 compute-0 nova_compute[189493]: 2025-12-09 11:00:03.033 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:03 compute-0 podman[244816]: 2025-12-09 11:00:03.935993062 +0000 UTC m=+0.083791738 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 11:00:07 compute-0 nova_compute[189493]: 2025-12-09 11:00:07.389 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:08 compute-0 nova_compute[189493]: 2025-12-09 11:00:08.037 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:12 compute-0 nova_compute[189493]: 2025-12-09 11:00:12.394 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:12 compute-0 podman[244839]: 2025-12-09 11:00:12.980638548 +0000 UTC m=+0.128683812 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 11:00:13 compute-0 nova_compute[189493]: 2025-12-09 11:00:13.042 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:15 compute-0 podman[244858]: 2025-12-09 11:00:15.915181734 +0000 UTC m=+0.065866522 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 11:00:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:00:16.992 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:00:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:00:16.992 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:00:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:00:16.993 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:00:17 compute-0 nova_compute[189493]: 2025-12-09 11:00:17.395 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:18 compute-0 nova_compute[189493]: 2025-12-09 11:00:18.045 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:20 compute-0 podman[244887]: 2025-12-09 11:00:20.992447016 +0000 UTC m=+0.115367658 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Dec 09 11:00:21 compute-0 podman[244886]: 2025-12-09 11:00:21.007021483 +0000 UTC m=+0.135131223 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 09 11:00:21 compute-0 sshd-session[244923]: Invalid user developer from 159.223.8.217 port 40918
Dec 09 11:00:22 compute-0 sshd-session[244923]: Connection closed by invalid user developer 159.223.8.217 port 40918 [preauth]
Dec 09 11:00:22 compute-0 nova_compute[189493]: 2025-12-09 11:00:22.397 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:23 compute-0 nova_compute[189493]: 2025-12-09 11:00:23.048 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.294 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.295 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.311 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'name': 'vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.318 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.325 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.331 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:00:23.333325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.342 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.348 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.354 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.361 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:00:23.364528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.407 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.407 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.408 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.451 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.452 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.453 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.498 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.498 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.498 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.538 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.538 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.538 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.541 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.544 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:00:23.540289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:00:23.542340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:00:23.544121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.645 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.646 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.646 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.741 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.742 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.742 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.858 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.859 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.860 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.961 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.962 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.964 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.966 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.967 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.967 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.968 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:00:23.968434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.969 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.970 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.970 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.971 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.971 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.972 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:00:23.973043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.973 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.974 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 386883662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.974 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 91523197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.975 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 560654086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.975 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.976 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.976 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.977 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.977 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.978 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.978 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.979 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.979 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.981 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.982 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:00:23.981831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.982 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.983 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.983 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.984 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.984 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.985 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.985 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.985 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.986 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.986 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.987 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.987 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.989 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.990 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:00:23.989864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.990 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.991 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.991 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.992 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.992 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.993 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.993 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.993 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.994 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.994 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.995 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.995 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.997 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:00:23.998087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.998 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.033 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/cpu volume: 33290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.067 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 390330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.110 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 34450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.140 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 43510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
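[editor's note] The cpu values polled above (e.g. 33290000000 for instance 32dd7fb0-...) are cumulative guest CPU time in nanoseconds, so a rate has to be derived from two successive polls. A minimal sketch of that derivation follows; the helper name, the 300 s interval, and the vCPU count are illustrative assumptions, not part of ceilometer.

    # Hypothetical helper (not ceilometer code): derive average CPU utilization
    # from two successive "cpu" samples, assuming the meter is cumulative guest
    # CPU time in nanoseconds as logged above.
    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        delta_ns = curr_ns - prev_ns            # CPU time consumed during the interval
        wall_ns = interval_s * 1e9 * vcpus      # wall-clock CPU time available
        return 100.0 * delta_ns / wall_ns

    # Example with made-up follow-up value in the same ballpark as the log:
    # ~3e9 ns of additional CPU time over a 300 s poll on 1 vCPU -> ~1 %
    print(round(cpu_util_percent(33_290_000_000, 36_290_000_000, 300), 2))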
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.143 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.144 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.144 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:00:24.144268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.145 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.146 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.146 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.147 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.147 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.148 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.148 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.148 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.149 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.149 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.149 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.152 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.152 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.152 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.153 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.153 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.154 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.154 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.155 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.155 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.156 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.156 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.156 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:00:24.151821) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 1670377851 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:00:24.159197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 9651641 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.160 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.160 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2122486229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.161 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2223058984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.162 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.162 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.163 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.163 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.163 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.166 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.166 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:00:24.165647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.167 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.169 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 7548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.169 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.170 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.172 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.172 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.173 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.173 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.173 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.174 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.174 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.175 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.175 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.176 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.176 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.180 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.180 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.180 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.182 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.185 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.185 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.185 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.185 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.186 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.188 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:00:24.168747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.190 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.190 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:00:24.171747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.192 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.192 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.192 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 465 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.192 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:00:24.179491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:00:24.181659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 48.97265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:00:24.183537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:00:24.184959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:00:24.187091) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:00:24.189390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:00:24.191906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:00:24.193816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
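The ceilometer_agent_compute debug lines above all follow the same "<instance-uuid>/<meter> volume: <value>" pattern, so one polling cycle can be summarized straight from the journal. A minimal sketch, assuming the entries are read back with journalctl filtered by the syslog identifier visible in these lines; the regex and the journalctl invocation are illustrative, not part of ceilometer:

import re
import subprocess
from collections import defaultdict

# "<instance-uuid>/<meter> volume: <value>", the layout of the debug lines above.
SAMPLE_RE = re.compile(r"([0-9a-f-]{36})/([\w.]+) volume: ([\d.]+)")

# "ceilometer_agent_compute" is the syslog identifier shown in the log lines.
journal = subprocess.run(
    ["journalctl", "-t", "ceilometer_agent_compute", "-o", "cat", "--no-pager"],
    capture_output=True, text=True, check=False,
).stdout

volumes = defaultdict(dict)
for instance, meter, value in SAMPLE_RE.findall(journal):
    volumes[instance][meter] = float(value)

for instance, meters in sorted(volumes.items()):
    print(instance, meters.get("memory.usage"), meters.get("network.incoming.packets"))
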
Dec 09 11:00:24 compute-0 podman[244926]: 2025-12-09 11:00:24.923237607 +0000 UTC m=+0.071715418 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 09 11:00:24 compute-0 podman[244927]: 2025-12-09 11:00:24.955355371 +0000 UTC m=+0.103639927 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm)
Dec 09 11:00:27 compute-0 nova_compute[189493]: 2025-12-09 11:00:27.399 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:28 compute-0 nova_compute[189493]: 2025-12-09 11:00:28.051 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:29 compute-0 podman[203687]: time="2025-12-09T11:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:00:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:00:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Dec 09 11:00:30 compute-0 podman[244960]: 2025-12-09 11:00:30.984505449 +0000 UTC m=+0.121843961 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
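The openstack_network_exporter errors above are raised because no control socket files are found for ovn-northd and the OVS database server on this node. A minimal sketch of the same check, with the usual EDPM socket directories as assumed paths (they are not named in the log):

import glob

# Typical control-socket locations for OVS and OVN daemons; assumed, not from the log.
for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    found = glob.glob(pattern)
    print(pattern, "->", found if found else "no control socket files found")
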
Dec 09 11:00:31 compute-0 nova_compute[189493]: 2025-12-09 11:00:31.872 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:00:32 compute-0 nova_compute[189493]: 2025-12-09 11:00:32.403 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:32 compute-0 nova_compute[189493]: 2025-12-09 11:00:32.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:00:32 compute-0 nova_compute[189493]: 2025-12-09 11:00:32.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:00:32 compute-0 podman[244980]: 2025-12-09 11:00:32.989948644 +0000 UTC m=+0.133113060 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 09 11:00:33 compute-0 nova_compute[189493]: 2025-12-09 11:00:33.054 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:33 compute-0 nova_compute[189493]: 2025-12-09 11:00:33.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:00:34 compute-0 nova_compute[189493]: 2025-12-09 11:00:34.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:00:34 compute-0 podman[245006]: 2025-12-09 11:00:34.950300871 +0000 UTC m=+0.101379666 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
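The node_exporter container above maps host port 9100 (see its 'ports' entry), so the metrics it exposes can be scraped locally. A minimal sketch; whether the listener speaks plain HTTP or TLS depends on the mounted node_exporter.yaml web config, so the http:// scheme here is an assumption:

import urllib.request

# Port 9100 comes from the container's port mapping above; plain HTTP is assumed
# and this will fail if the web config file enforces TLS or client auth.
with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
    for line in resp.read().decode().splitlines():
        if line.startswith("node_"):
            print(line)
            break
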
Dec 09 11:00:36 compute-0 nova_compute[189493]: 2025-12-09 11:00:36.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:00:37 compute-0 nova_compute[189493]: 2025-12-09 11:00:37.407 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:37 compute-0 nova_compute[189493]: 2025-12-09 11:00:37.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:00:37 compute-0 nova_compute[189493]: 2025-12-09 11:00:37.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:00:38 compute-0 nova_compute[189493]: 2025-12-09 11:00:38.058 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:38 compute-0 nova_compute[189493]: 2025-12-09 11:00:38.541 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:00:38 compute-0 nova_compute[189493]: 2025-12-09 11:00:38.542 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:00:38 compute-0 nova_compute[189493]: 2025-12-09 11:00:38.542 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.163 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.182 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.183 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
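The "Updating instance_info_cache with network_info" line above carries the full port description for instance 1bddf2bf-8932-4428-97d7-7342a7ec414b. A minimal sketch of walking that structure to list fixed and floating addresses; the literal below is trimmed from the logged JSON to just the fields the loop touches:

# Trimmed from the network_info payload logged above.
network_info = [{
    "id": "7819acf8-daa2-4391-96d4-ef33c260f794",
    "network": {
        "label": "private",
        "subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{
                "address": "192.168.0.212",
                "type": "fixed",
                "floating_ips": [{"address": "192.168.122.172", "type": "floating"}],
            }],
        }],
    },
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floating = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["network"]["label"], ip["address"], "->", floating)
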
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.185 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.220 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.221 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.222 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.312 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.398 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.399 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.487 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.490 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.556 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.557 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.615 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.621 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.681 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.684 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.753 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.754 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.856 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.858 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.950 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.962 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.044 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.046 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.161 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.163 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.257 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.260 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.360 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.373 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.475 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.477 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.538 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.539 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.649 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.652 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.740 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
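The resource audit above shells out to qemu-img info (wrapped in oslo_concurrency.prlimit) for every instance disk. A minimal sketch of the same probe run directly; the prlimit wrapper and C locale are dropped, the disk path is copied from one of the logged commands, and the JSON keys used are standard qemu-img info output fields:

import json
import subprocess

# Path copied from one of the commands logged above.
disk = "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk"
out = subprocess.run(
    ["qemu-img", "info", disk, "--force-share", "--output=json"],
    check=True, capture_output=True, text=True,
).stdout
info = json.loads(out)
print(info["format"], info["virtual-size"], info.get("actual-size"))
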
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.266 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.268 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4620MB free_disk=72.11589431762695GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.268 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.269 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.409 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.435 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.436 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.436 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.436 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.436 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.437 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.560 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.582 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.584 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.585 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:00:43 compute-0 nova_compute[189493]: 2025-12-09 11:00:43.063 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:43 compute-0 podman[245076]: 2025-12-09 11:00:43.954285664 +0000 UTC m=+0.101896709 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:00:45 compute-0 nova_compute[189493]: 2025-12-09 11:00:45.243 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:00:45 compute-0 nova_compute[189493]: 2025-12-09 11:00:45.243 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:00:46 compute-0 podman[245096]: 2025-12-09 11:00:46.931037323 +0000 UTC m=+0.076660859 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 11:00:47 compute-0 nova_compute[189493]: 2025-12-09 11:00:47.413 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:48 compute-0 nova_compute[189493]: 2025-12-09 11:00:48.065 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:50 compute-0 sshd-session[245120]: Invalid user developer from 159.223.8.217 port 40632
Dec 09 11:00:50 compute-0 sshd-session[245120]: Connection closed by invalid user developer 159.223.8.217 port 40632 [preauth]
Dec 09 11:00:51 compute-0 podman[245123]: 2025-12-09 11:00:51.969227785 +0000 UTC m=+0.096509377 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 09 11:00:51 compute-0 podman[245122]: 2025-12-09 11:00:51.978267626 +0000 UTC m=+0.108755053 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=)
Dec 09 11:00:52 compute-0 nova_compute[189493]: 2025-12-09 11:00:52.416 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:53 compute-0 nova_compute[189493]: 2025-12-09 11:00:53.069 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:55 compute-0 podman[245158]: 2025-12-09 11:00:55.955602733 +0000 UTC m=+0.088647497 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 09 11:00:55 compute-0 podman[245157]: 2025-12-09 11:00:55.961895271 +0000 UTC m=+0.101762527 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 09 11:00:57 compute-0 nova_compute[189493]: 2025-12-09 11:00:57.418 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:58 compute-0 nova_compute[189493]: 2025-12-09 11:00:58.073 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:00:59 compute-0 podman[203687]: time="2025-12-09T11:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:00:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:00:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec 09 11:01:01 compute-0 CROND[245195]: (root) CMD (run-parts /etc/cron.hourly)
Dec 09 11:01:01 compute-0 run-parts[245198]: (/etc/cron.hourly) starting 0anacron
Dec 09 11:01:01 compute-0 run-parts[245204]: (/etc/cron.hourly) finished 0anacron
Dec 09 11:01:01 compute-0 CROND[245194]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 09 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:01:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:01:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:01:02 compute-0 podman[245205]: 2025-12-09 11:01:02.0031699 +0000 UTC m=+0.134043984 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 09 11:01:02 compute-0 nova_compute[189493]: 2025-12-09 11:01:02.421 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:03 compute-0 nova_compute[189493]: 2025-12-09 11:01:03.076 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:03 compute-0 podman[245225]: 2025-12-09 11:01:03.978346972 +0000 UTC m=+0.134415905 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 11:01:05 compute-0 podman[245250]: 2025-12-09 11:01:05.975144723 +0000 UTC m=+0.119622213 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 11:01:07 compute-0 nova_compute[189493]: 2025-12-09 11:01:07.426 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:08 compute-0 nova_compute[189493]: 2025-12-09 11:01:08.079 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:12 compute-0 nova_compute[189493]: 2025-12-09 11:01:12.428 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:13 compute-0 nova_compute[189493]: 2025-12-09 11:01:13.083 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:14 compute-0 podman[245272]: 2025-12-09 11:01:14.834372949 +0000 UTC m=+0.117189167 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 09 11:01:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:16.993 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:01:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:16.995 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:01:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:16.996 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:01:17 compute-0 nova_compute[189493]: 2025-12-09 11:01:17.431 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:17 compute-0 podman[245292]: 2025-12-09 11:01:17.921003045 +0000 UTC m=+0.077764925 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 11:01:18 compute-0 nova_compute[189493]: 2025-12-09 11:01:18.087 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:18 compute-0 sshd-session[245316]: Invalid user developer from 159.223.8.217 port 50868
Dec 09 11:01:19 compute-0 sshd-session[245316]: Connection closed by invalid user developer 159.223.8.217 port 50868 [preauth]
Dec 09 11:01:22 compute-0 nova_compute[189493]: 2025-12-09 11:01:22.432 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:22 compute-0 podman[245319]: 2025-12-09 11:01:22.973008178 +0000 UTC m=+0.104000986 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec 09 11:01:23 compute-0 podman[245320]: 2025-12-09 11:01:23.000208813 +0000 UTC m=+0.133275077 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:01:23 compute-0 nova_compute[189493]: 2025-12-09 11:01:23.093 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.760 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.761 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.761 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.762 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.762 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.764 189497 INFO nova.compute.manager [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Terminating instance
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.766 189497 DEBUG nova.compute.manager [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 09 11:01:24 compute-0 kernel: tap7819acf8-da (unregistering): left promiscuous mode
Dec 09 11:01:24 compute-0 NetworkManager[56302]: <info>  [1765278084.8266] device (tap7819acf8-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 09 11:01:24 compute-0 ovn_controller[97780]: 2025-12-09T11:01:24Z|00050|binding|INFO|Releasing lport 7819acf8-daa2-4391-96d4-ef33c260f794 from this chassis (sb_readonly=0)
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.839 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:24 compute-0 ovn_controller[97780]: 2025-12-09T11:01:24Z|00051|binding|INFO|Setting lport 7819acf8-daa2-4391-96d4-ef33c260f794 down in Southbound
Dec 09 11:01:24 compute-0 ovn_controller[97780]: 2025-12-09T11:01:24Z|00052|binding|INFO|Removing iface tap7819acf8-da ovn-installed in OVS
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.846 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.849 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:4e:b4 192.168.0.212'], port_security=['fa:16:3e:01:4e:b4 192.168.0.212'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-x2vp5udxgoax-du67okrzyrz6-port-copozzjp5fc5', 'neutron:cidrs': '192.168.0.212/24', 'neutron:device_id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-x2vp5udxgoax-du67okrzyrz6-port-copozzjp5fc5', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=7819acf8-daa2-4391-96d4-ef33c260f794) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.850 106644 INFO neutron.agent.ovn.metadata.agent [-] Port 7819acf8-daa2-4391-96d4-ef33c260f794 in datapath c5af7354-5afe-400a-9e13-5500648117d8 unbound from our chassis
Dec 09 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.851 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8
Dec 09 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.858 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.872 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[83df7a6c-9dfb-4299-a236-90edb6ab6ad6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.914 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[ac87543e-8d39-412c-93e2-335d69c99c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:01:24 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 09 11:01:24 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 56.864s CPU time.
Dec 09 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.918 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7f405e-cfa7-4aac-83ef-0d72d6161245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:01:24 compute-0 systemd-machined[155790]: Machine qemu-2-instance-00000002 terminated.
Dec 09 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.960 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[200384ca-ab65-428a-b8e4-ac36a4da5fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.995 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[47d61d55-90e6-4e57-b60a-2b6d3e21e3b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 18085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245370, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.020 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[8be8e4ef-9ee3-469c-abbf-375452dbcb5e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245377, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245377, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.024 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.026 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.033 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.033 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.033 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.034 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.034 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.074 189497 INFO nova.virt.libvirt.driver [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance destroyed successfully.
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.075 189497 DEBUG nova.objects.instance [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 1bddf2bf-8932-4428-97d7-7342a7ec414b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.084 189497 DEBUG nova.compute.manager [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-unplugged-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.085 189497 DEBUG oslo_concurrency.lockutils [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.085 189497 DEBUG oslo_concurrency.lockutils [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.085 189497 DEBUG oslo_concurrency.lockutils [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.086 189497 DEBUG nova.compute.manager [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] No waiting events found dispatching network-vif-unplugged-7819acf8-daa2-4391-96d4-ef33c260f794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.086 189497 DEBUG nova.compute.manager [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-unplugged-7819acf8-daa2-4391-96d4-ef33c260f794 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.101 189497 DEBUG nova.virt.libvirt.vif [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-09T10:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',id=2,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-09T10:50:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-ljrndswf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-09T10:50:04Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 09 11:01:25 compute-0 nova_compute[189493]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=1bddf2bf-8932-4428-97d7-7342a7ec414b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.102 189497 DEBUG nova.network.os_vif_util [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.103 189497 DEBUG nova.network.os_vif_util [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.104 189497 DEBUG os_vif [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
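The Instance dump a few entries above embeds the guest's user_data as a Base64-encoded MIME multipart (the blob decodes to "Content-Type: multipart/mixed; ..." carrying Heat's cloud-init parts); it is split across journal entries here because the message exceeded rsyslog's size limit (see the rsyslogd warning below). A minimal decode sketch, assuming the complete Base64 value has first been saved to a file, since the copy in this log is truncated:

    import base64
    import email
    import sys

    # Hypothetical input file holding the full user_data value; the string in the log
    # above is cut short by rsyslog and cannot be decoded as-is.
    path = sys.argv[1] if len(sys.argv) > 1 else "user_data.b64"
    with open(path) as f:
        raw = base64.b64decode(f.read())

    msg = email.message_from_bytes(raw)
    for part in msg.walk():
        print(part.get_content_type(), part.get_filename())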
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.107 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.107 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7819acf8-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.109 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.111 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.120 189497 INFO os_vif [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da')
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.121 189497 INFO nova.virt.libvirt.driver [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Deleting instance files /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b_del
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.122 189497 INFO nova.virt.libvirt.driver [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Deletion of /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b_del complete
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.226 189497 DEBUG nova.virt.libvirt.host [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.227 189497 INFO nova.virt.libvirt.host [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] UEFI support detected
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.230 189497 INFO nova.compute.manager [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Took 0.46 seconds to destroy the instance on the hypervisor.
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.231 189497 DEBUG oslo.service.loopingcall [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.231 189497 DEBUG nova.compute.manager [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.232 189497 DEBUG nova.network.neutron [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 09 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.294 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.295 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 11:01:25 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 11:01:25.101 189497 DEBUG nova.virt.libvirt.vif [None req-e481dd27-e4 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
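The rsyslogd warning above explains the split Instance dump earlier in this window: the incoming nova debug message was 8192 bytes against a configured limit of 8096. If truncation is not acceptable, rsyslog's global message-size limit can be raised; a configuration sketch, with the value a site choice and the directive placed before any input modules are loaded:

    # /etc/rsyslog.conf (near the top, before module()/$ModLoad lines)
    global(maxMessageSize="64k")
    # legacy-directive equivalent:
    # $MaxMessageSize 64k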
Dec 09 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.305 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:26.298 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:01:27 compute-0 podman[245394]: 2025-12-09 11:01:27.008494865 +0000 UTC m=+0.139507254 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 11:01:27 compute-0 podman[245395]: 2025-12-09 11:01:27.01393728 +0000 UTC m=+0.146054848 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
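The two podman entries above are periodic health probes reporting health_status=healthy for ovn_metadata_agent and ceilometer_agent_compute. The same probe can be triggered on demand; a sketch using the container names from the log, assuming root access to podman:

    import subprocess

    for name in ("ovn_metadata_agent", "ceilometer_agent_compute"):
        # "podman healthcheck run" exits 0 when the container's healthcheck passes.
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")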
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.135 189497 DEBUG nova.network.neutron [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.170 189497 INFO nova.compute.manager [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Took 1.94 seconds to deallocate network for instance.
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.202 189497 DEBUG nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.203 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.204 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.204 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.205 189497 DEBUG nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] No waiting events found dispatching network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.206 189497 WARNING nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received unexpected event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 for instance with vm_state active and task_state deleting.
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.207 189497 DEBUG nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-changed-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.208 189497 DEBUG nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Refreshing instance network info cache due to event network-changed-7819acf8-daa2-4391-96d4-ef33c260f794. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.209 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.209 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.210 189497 DEBUG nova.network.neutron [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Refreshing network info cache for port 7819acf8-daa2-4391-96d4-ef33c260f794 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.242 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.243 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.378 189497 DEBUG nova.network.neutron [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.384 189497 DEBUG nova.compute.provider_tree [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.437 189497 DEBUG nova.scheduler.client.report [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
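The inventory reported above is what placement schedules against; usable capacity per resource class is roughly (total - reserved) * allocation_ratio. A worked sketch with the values from the log line:

    # Values copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2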
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.440 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.471 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.501 189497 INFO nova.scheduler.client.report [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 1bddf2bf-8932-4428-97d7-7342a7ec414b
Dec 09 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.572 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:01:28 compute-0 nova_compute[189493]: 2025-12-09 11:01:28.173 189497 DEBUG nova.network.neutron [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:01:28 compute-0 nova_compute[189493]: 2025-12-09 11:01:28.207 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:01:29 compute-0 sshd-session[245431]: Received disconnect from 80.94.93.233 port 44220:11:  [preauth]
Dec 09 11:01:29 compute-0 sshd-session[245431]: Disconnected from authenticating user root 80.94.93.233 port 44220 [preauth]
Dec 09 11:01:29 compute-0 podman[203687]: time="2025-12-09T11:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:01:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:01:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
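The two HTTP requests above are a client polling podman's REST API over its unix socket (the libpod containers/json and containers/stats endpoints). The same endpoint can be queried directly; a sketch using only the standard library, where the socket path is an assumed rootful default rather than something taken from this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Minimal HTTP-over-unix-socket client, enough for the libpod endpoint above."""

        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed default socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")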
Dec 09 11:01:30 compute-0 nova_compute[189493]: 2025-12-09 11:01:30.110 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:01:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:01:31 compute-0 openstack_network_exporter[205823]: 
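The exporter errors above mean it found no ovs-appctl control sockets for ovsdb-server or ovn-northd (ovn-northd normally does not run on a compute node) and no userspace datapath for the dpif-netdev queries. A quick sketch to list which control sockets actually exist, assuming the runtime directories the exporter container mounts (/run/openvswitch and /run/ovn, per its config further below):

    import glob

    # OVS/OVN daemons create <daemon>.<pid>.ctl sockets in their runtime directories.
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none found")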
Dec 09 11:01:31 compute-0 nova_compute[189493]: 2025-12-09 11:01:31.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:32 compute-0 nova_compute[189493]: 2025-12-09 11:01:32.438 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:32 compute-0 podman[245433]: 2025-12-09 11:01:32.991532712 +0000 UTC m=+0.131787847 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Dec 09 11:01:33 compute-0 nova_compute[189493]: 2025-12-09 11:01:33.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:34 compute-0 nova_compute[189493]: 2025-12-09 11:01:34.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:34 compute-0 nova_compute[189493]: 2025-12-09 11:01:34.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:35 compute-0 podman[245452]: 2025-12-09 11:01:35.017715158 +0000 UTC m=+0.148171425 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec 09 11:01:35 compute-0 nova_compute[189493]: 2025-12-09 11:01:35.113 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:36 compute-0 nova_compute[189493]: 2025-12-09 11:01:36.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:36 compute-0 nova_compute[189493]: 2025-12-09 11:01:36.867 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:36 compute-0 podman[245479]: 2025-12-09 11:01:36.949392913 +0000 UTC m=+0.089117560 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:01:37 compute-0 nova_compute[189493]: 2025-12-09 11:01:37.441 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:37 compute-0 nova_compute[189493]: 2025-12-09 11:01:37.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:38 compute-0 nova_compute[189493]: 2025-12-09 11:01:38.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:38 compute-0 nova_compute[189493]: 2025-12-09 11:01:38.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:01:39 compute-0 nova_compute[189493]: 2025-12-09 11:01:39.029 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:01:39 compute-0 nova_compute[189493]: 2025-12-09 11:01:39.030 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:01:39 compute-0 nova_compute[189493]: 2025-12-09 11:01:39.030 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 11:01:40 compute-0 nova_compute[189493]: 2025-12-09 11:01:40.069 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278085.0672002, 1bddf2bf-8932-4428-97d7-7342a7ec414b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 11:01:40 compute-0 nova_compute[189493]: 2025-12-09 11:01:40.069 189497 INFO nova.compute.manager [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] VM Stopped (Lifecycle Event)
Dec 09 11:01:40 compute-0 nova_compute[189493]: 2025-12-09 11:01:40.092 189497 DEBUG nova.compute.manager [None req-98271968-f645-40ff-a03b-c58badc34918 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 11:01:40 compute-0 nova_compute[189493]: 2025-12-09 11:01:40.117 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:42 compute-0 nova_compute[189493]: 2025-12-09 11:01:42.444 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.201 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.225 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.227 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.228 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.279 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.280 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.280 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.281 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.378 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.460 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.461 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.530 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.532 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.604 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.605 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.689 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.702 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.793 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.795 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.854 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.855 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.912 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.913 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.980 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.988 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.066 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.068 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.137 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.138 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.207 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.208 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.264 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.669 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.671 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4780MB free_disk=72.13796997070312GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.671 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.672 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.783 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.784 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.784 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.785 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.785 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.923 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.944 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.976 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.977 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:01:45 compute-0 nova_compute[189493]: 2025-12-09 11:01:45.120 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:45 compute-0 nova_compute[189493]: 2025-12-09 11:01:45.591 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:01:45 compute-0 nova_compute[189493]: 2025-12-09 11:01:45.592 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:01:45 compute-0 podman[245538]: 2025-12-09 11:01:45.996631502 +0000 UTC m=+0.125467739 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 09 11:01:47 compute-0 sshd-session[245556]: Invalid user developer from 159.223.8.217 port 52802
Dec 09 11:01:47 compute-0 nova_compute[189493]: 2025-12-09 11:01:47.447 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:47 compute-0 sshd-session[245556]: Connection closed by invalid user developer 159.223.8.217 port 52802 [preauth]
Dec 09 11:01:48 compute-0 podman[245558]: 2025-12-09 11:01:48.963555163 +0000 UTC m=+0.114198977 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 11:01:50 compute-0 nova_compute[189493]: 2025-12-09 11:01:50.123 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:52 compute-0 nova_compute[189493]: 2025-12-09 11:01:52.451 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:53 compute-0 podman[245584]: 2025-12-09 11:01:53.929682843 +0000 UTC m=+0.087196118 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 11:01:53 compute-0 podman[245583]: 2025-12-09 11:01:53.981036034 +0000 UTC m=+0.128802398 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, version=9.4, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 11:01:55 compute-0 nova_compute[189493]: 2025-12-09 11:01:55.125 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:57 compute-0 nova_compute[189493]: 2025-12-09 11:01:57.454 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:01:57 compute-0 podman[245623]: 2025-12-09 11:01:57.964268205 +0000 UTC m=+0.104293974 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 09 11:01:57 compute-0 podman[245622]: 2025-12-09 11:01:57.989207261 +0000 UTC m=+0.133208775 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Dec 09 11:01:59 compute-0 ovn_controller[97780]: 2025-12-09T11:01:59Z|00053|memory_trim|INFO|Detected inactivity (last active 30025 ms ago): trimming memory
Dec 09 11:01:59 compute-0 podman[203687]: time="2025-12-09T11:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:01:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:01:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec 09 11:02:00 compute-0 nova_compute[189493]: 2025-12-09 11:02:00.128 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:02:02 compute-0 nova_compute[189493]: 2025-12-09 11:02:02.456 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:03 compute-0 podman[245661]: 2025-12-09 11:02:03.977034797 +0000 UTC m=+0.131825538 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Dec 09 11:02:05 compute-0 nova_compute[189493]: 2025-12-09 11:02:05.131 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:05 compute-0 podman[245681]: 2025-12-09 11:02:05.984570557 +0000 UTC m=+0.132619049 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 11:02:07 compute-0 nova_compute[189493]: 2025-12-09 11:02:07.459 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:07 compute-0 podman[245709]: 2025-12-09 11:02:07.975428364 +0000 UTC m=+0.131510650 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 11:02:10 compute-0 nova_compute[189493]: 2025-12-09 11:02:10.134 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:12 compute-0 nova_compute[189493]: 2025-12-09 11:02:12.462 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:15 compute-0 nova_compute[189493]: 2025-12-09 11:02:15.139 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:16 compute-0 sshd-session[245732]: Invalid user developer from 159.223.8.217 port 35866
Dec 09 11:02:16 compute-0 sshd-session[245732]: Connection closed by invalid user developer 159.223.8.217 port 35866 [preauth]
Dec 09 11:02:16 compute-0 podman[245734]: 2025-12-09 11:02:16.799232287 +0000 UTC m=+0.099994630 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 11:02:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:02:16.994 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:02:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:02:16.995 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:02:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:02:16.996 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:02:17 compute-0 nova_compute[189493]: 2025-12-09 11:02:17.466 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:19 compute-0 podman[245755]: 2025-12-09 11:02:19.972563775 +0000 UTC m=+0.112609927 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:02:20 compute-0 nova_compute[189493]: 2025-12-09 11:02:20.142 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:21 compute-0 sshd-session[245779]: Connection closed by 196.251.100.74 port 50054
Dec 09 11:02:22 compute-0 nova_compute[189493]: 2025-12-09 11:02:22.467 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.295 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.295 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
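The "Registering pollster" lines above show the agent binding each stevedore extension to one shared ThreadPoolExecutor with empty per-run caches. A minimal Python sketch of that registration step, assuming a simplified agent (only the name register_pollster_execution is taken from the log; the class and method bodies are illustrative):

from concurrent.futures import ThreadPoolExecutor

class AgentSketch:
    # Hypothetical skeleton: only register_pollster_execution is a name
    # taken from the log above; everything else here is an assumption.
    def __init__(self, max_workers=8):
        self.executor = ThreadPoolExecutor(max_workers=max_workers)
        self.cache = {}             # logged as "cache [{}]"
        self.pollster_history = {}  # logged as "pollster history [{}]"
        self.discovery_cache = {}   # logged as "discovery cache [{}]"

    def register_pollster_execution(self, extension, source="pollsters"):
        # One "Registering pollster ..." line is emitted per extension;
        # the poll itself runs later on the shared executor.
        return self.executor.submit(self._run, extension, source)

    def _run(self, extension, source):
        pass  # coordination check, heartbeat, sampling (see later sketches)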
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.303 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'name': 'vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.307 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.310 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
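Each "instance data" line dumps the per-instance metadata that discover_libvirt_polling returns and that every sample below is keyed on. A small sketch of the dict shape, using fields of the test_0 instance from the log (the describe helper is hypothetical, not ceilometer code):

instance = {
    "id": "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f",
    "name": "test_0",
    "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512,
               "disk": 1, "ephemeral": 1, "swap": 0},
    "OS-EXT-STS:vm_state": "running",
    "metadata": {},
}

def describe(inst):
    # Samples are keyed "<instance id>/<meter name>", as in the
    # _stats_to_sample lines that follow.
    f = inst["flavor"]
    return (f"{inst['id']}: {f['name']} "
            f"({f['vcpus']} vCPU, {f['ram']} MiB RAM, "
            f"{f['disk']} GiB root + {f['ephemeral']} GiB ephemeral)")

print(describe(instance))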
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.310 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.311 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:02:23.311346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.316 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.320 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
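The network.incoming.bytes cycle above illustrates the per-pollster sequence that repeats for every meter below: coordination check (no hashring is configured, so the agent polls all of its local instances), heartbeat update, then one cumulative sample per instance. A hedged sketch of that flow, with all names illustrative rather than ceilometer API:

import datetime

heartbeats = {}

def run_pollster(name, instance_ids, get_volume, hashring=None):
    # "not configured in a source for polling that requires coordination":
    # with no hashring, no filtering happens and every instance is polled.
    if hashring is not None:
        instance_ids = [i for i in instance_ids if hashring.belongs_to_self(i)]
    heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
    # One cumulative sample per instance, keyed "<instance id>/<meter>".
    return [(f"{iid}/{name}", get_volume(iid)) for iid in instance_ids]

samples = run_pollster(
    "network.incoming.bytes",
    ["32dd7fb0-7003-48cc-b688-4b94946c911f"],
    lambda iid: 1696,  # the volume seen for this instance in the log above
)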
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.326 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:02:23.326057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.352 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.352 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.353 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.381 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.382 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.382 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.451 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.451 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.451 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
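disk.device.capacity yields three samples per instance, consistent with three block devices per guest: the two 1073741824-byte samples are exactly 1 GiB, matching the flavor's disk=1 and ephemeral=1, while the ~570 KiB third device is plausibly a config drive. Device names are an assumption, since the log omits them:

GIB = 1024 ** 3
capacities = {"device 1": 1073741824, "device 2": 1073741824, "device 3": 583680}
for dev, cap in capacities.items():
    print(f"{dev}: {cap} B = {cap / GIB:.6f} GiB ({cap // 1024} KiB)")
# devices 1 and 2: exactly 1 GiB each; device 3: 570 KiB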
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.452 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.454 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.454 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.454 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.457 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:02:23.453261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.458 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:02:23.455193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:02:23.457048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
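The heartbeat lines are emitted by worker 14, while the matching "Updated heartbeat" lines come from 12 and can carry an earlier timestamp (the .457 update above records .453261), suggesting the status write is handed off and applied asynchronously. A speculative sketch of such a handoff; the queue-based design is an assumption, not ceilometer's implementation:

import datetime
import queue
import threading

updates = queue.Queue()
status = {}

def heartbeat(pollster_name):
    # Worker side ("14"): record the time now and hand it off.
    updates.put((pollster_name, datetime.datetime.now(datetime.timezone.utc)))

def _update_status():
    # Consumer side ("12"): apply updates later, keeping the original
    # timestamp, which would explain the lagging values in the log.
    while True:
        name, ts = updates.get()
        status[name] = ts
        updates.task_done()

threading.Thread(target=_update_status, daemon=True).start()
heartbeat("network.outgoing.packets")
updates.join()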
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.550 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.550 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.551 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.640 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.641 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.641 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.703 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.704 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.704 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 386883662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 91523197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 560654086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:02:23.705359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
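Pairing the cumulative read counters logged above for the first device of instance 32dd7fb0 (pairing by sample order is an assumption) gives per-request averages:

read_latency_ns = 386883662  # disk.device.read.latency
read_bytes = 23308800        # disk.device.read.bytes
read_requests = 840          # disk.device.read.requests

print(f"avg latency: {read_latency_ns / read_requests / 1e6:.3f} ms/request")
print(f"avg size:    {read_bytes / read_requests / 1024:.1f} KiB/request")
# -> about 0.461 ms and 27.1 KiB per read request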
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:02:23.706899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.712 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.712 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:02:23.709328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.712 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:02:23.711828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.713 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.715 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.716 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:02:23.716259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.744 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/cpu volume: 35010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.770 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 36170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.799 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 45220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.799 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
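The cpu meter is cumulative guest CPU time in nanoseconds (45220000000 ns above is about 45.2 s for test_0), so a utilization percentage requires two polls. A small sketch of that derivation; the second sample value and the 300 s interval are hypothetical:

def cpu_util_percent(cpu_ns_prev, cpu_ns_now, interval_s, vcpus=1):
    # Fraction of available CPU time consumed between two polls.
    return 100.0 * (cpu_ns_now - cpu_ns_prev) / (interval_s * 1e9 * vcpus)

# 45_220_000_000 ns is the test_0 sample above; the later value is assumed.
print(cpu_util_percent(45_220_000_000, 45_820_000_000, interval_s=300))  # 0.2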
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:02:23.800607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.801 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.803 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.803 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.804 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.805 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.805 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.806 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.807 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.808 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
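For the first device of instance 32dd7fb0, the three disk meters logged above relate as capacity ≥ allocation ≥ usage (again pairing samples by order, which is an assumption):

capacity = 1073741824   # disk.device.capacity
allocation = 22224896   # disk.device.allocation
usage = 21299200        # disk.device.usage

print(f"allocated: {100 * allocation / capacity:.2f}% of capacity")
print(f"used:      {100 * usage / capacity:.2f}% of capacity")
# a thin-provisioned 1 GiB disk with only ~20 MiB actually written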
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.808 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.810 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.810 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:02:23.810159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.813 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.814 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.814 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.815 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.818 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.819 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.820 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:02:23.820139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.821 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 1670377851 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.822 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 9651641 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.823 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.823 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2223058984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.824 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.824 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.825 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.826 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.826 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.828 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.830 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:02:23.829608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.830 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.831 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.832 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.833 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.835 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.835 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.836 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.837 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.837 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.838 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.840 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.840 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.841 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.841 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.842 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:02:23.835132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:02:23.840187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.843 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.844 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.844 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.844 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.845 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.845 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.847 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.847 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:02:23.846725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.847 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.847 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.848 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.848 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.848 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.849 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.849 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:02:23.849160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.851 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.851 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:02:23.851432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:02:23.853403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.854 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.854 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.854 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.856 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.856 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:02:23.855907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.856 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.857 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.857 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:02:23.858170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.859 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.859 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.861 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:02:23.860394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.861 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.861 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:02:23.862667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:02:24 compute-0 podman[245781]: 2025-12-09 11:02:24.943038478 +0000 UTC m=+0.100029461 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0, name=ubi9, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, container_name=kepler, com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 09 11:02:24 compute-0 podman[245782]: 2025-12-09 11:02:24.949036108 +0000 UTC m=+0.101555641 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 09 11:02:25 compute-0 nova_compute[189493]: 2025-12-09 11:02:25.144 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:27 compute-0 nova_compute[189493]: 2025-12-09 11:02:27.470 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:28 compute-0 podman[245818]: 2025-12-09 11:02:28.961409397 +0000 UTC m=+0.096093505 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 11:02:28 compute-0 podman[245819]: 2025-12-09 11:02:28.982910741 +0000 UTC m=+0.113645834 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:02:29 compute-0 podman[203687]: time="2025-12-09T11:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:02:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:02:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec 09 11:02:30 compute-0 nova_compute[189493]: 2025-12-09 11:02:30.148 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:02:32 compute-0 nova_compute[189493]: 2025-12-09 11:02:32.475 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:33 compute-0 nova_compute[189493]: 2025-12-09 11:02:33.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:02:34 compute-0 nova_compute[189493]: 2025-12-09 11:02:34.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:02:34 compute-0 nova_compute[189493]: 2025-12-09 11:02:34.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:02:34 compute-0 nova_compute[189493]: 2025-12-09 11:02:34.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:02:34 compute-0 podman[245854]: 2025-12-09 11:02:34.973325864 +0000 UTC m=+0.111894487 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec 09 11:02:35 compute-0 nova_compute[189493]: 2025-12-09 11:02:35.151 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:36 compute-0 nova_compute[189493]: 2025-12-09 11:02:36.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:02:36 compute-0 podman[245873]: 2025-12-09 11:02:36.992789593 +0000 UTC m=+0.145426242 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 11:02:37 compute-0 nova_compute[189493]: 2025-12-09 11:02:37.477 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:38 compute-0 podman[245900]: 2025-12-09 11:02:38.981972205 +0000 UTC m=+0.114999410 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 11:02:39 compute-0 nova_compute[189493]: 2025-12-09 11:02:39.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:02:39 compute-0 nova_compute[189493]: 2025-12-09 11:02:39.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:02:40 compute-0 nova_compute[189493]: 2025-12-09 11:02:40.153 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:40 compute-0 nova_compute[189493]: 2025-12-09 11:02:40.215 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:02:40 compute-0 nova_compute[189493]: 2025-12-09 11:02:40.216 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:02:40 compute-0 nova_compute[189493]: 2025-12-09 11:02:40.216 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.447 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.468 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.469 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.469 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.470 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.500 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.501 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.501 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.502 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.682 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.786 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.788 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.854 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.855 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.930 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.930 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.010 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.016 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.079 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.080 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.151 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.152 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.216 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.218 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.282 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.289 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.349 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.352 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.452 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.453 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.480 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.540 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.542 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.633 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.994 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.995 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4778MB free_disk=72.13796615600586GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.995 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.996 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.316 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.316 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.317 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.317 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.318 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.417 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.438 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.441 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.442 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:02:45 compute-0 nova_compute[189493]: 2025-12-09 11:02:45.157 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:45 compute-0 sshd-session[245962]: Invalid user developer from 159.223.8.217 port 39148
Dec 09 11:02:45 compute-0 sshd-session[245962]: Connection closed by invalid user developer 159.223.8.217 port 39148 [preauth]
Dec 09 11:02:46 compute-0 podman[245964]: 2025-12-09 11:02:46.968448133 +0000 UTC m=+0.107338105 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=multipathd)
Dec 09 11:02:47 compute-0 nova_compute[189493]: 2025-12-09 11:02:47.483 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:47 compute-0 nova_compute[189493]: 2025-12-09 11:02:47.814 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:02:47 compute-0 nova_compute[189493]: 2025-12-09 11:02:47.815 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:02:50 compute-0 nova_compute[189493]: 2025-12-09 11:02:50.159 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:50 compute-0 podman[245983]: 2025-12-09 11:02:50.951867798 +0000 UTC m=+0.094200664 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 11:02:52 compute-0 nova_compute[189493]: 2025-12-09 11:02:52.485 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:55 compute-0 nova_compute[189493]: 2025-12-09 11:02:55.162 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:55 compute-0 podman[246007]: 2025-12-09 11:02:55.954022785 +0000 UTC m=+0.099770284 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 11:02:55 compute-0 podman[246006]: 2025-12-09 11:02:55.954611471 +0000 UTC m=+0.103259827 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, version=9.4)
Dec 09 11:02:57 compute-0 nova_compute[189493]: 2025-12-09 11:02:57.490 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:02:59 compute-0 podman[203687]: time="2025-12-09T11:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:02:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:02:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Dec 09 11:02:59 compute-0 podman[246045]: 2025-12-09 11:02:59.959293294 +0000 UTC m=+0.093383804 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 11:02:59 compute-0 podman[246046]: 2025-12-09 11:02:59.985904514 +0000 UTC m=+0.115770520 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 09 11:03:00 compute-0 nova_compute[189493]: 2025-12-09 11:03:00.165 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:03:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:03:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:03:02 compute-0 nova_compute[189493]: 2025-12-09 11:03:02.492 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:05 compute-0 nova_compute[189493]: 2025-12-09 11:03:05.168 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:06 compute-0 podman[246081]: 2025-12-09 11:03:06.002440863 +0000 UTC m=+0.149393818 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public)
Dec 09 11:03:07 compute-0 nova_compute[189493]: 2025-12-09 11:03:07.496 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:08 compute-0 podman[246101]: 2025-12-09 11:03:08.050721031 +0000 UTC m=+0.191993065 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 09 11:03:09 compute-0 podman[246126]: 2025-12-09 11:03:09.961169491 +0000 UTC m=+0.104879980 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 11:03:10 compute-0 nova_compute[189493]: 2025-12-09 11:03:10.171 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:12 compute-0 nova_compute[189493]: 2025-12-09 11:03:12.498 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:15 compute-0 nova_compute[189493]: 2025-12-09 11:03:15.174 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:15 compute-0 sshd-session[246149]: Invalid user developer from 159.223.8.217 port 53742
Dec 09 11:03:15 compute-0 sshd-session[246149]: Connection closed by invalid user developer 159.223.8.217 port 53742 [preauth]
Dec 09 11:03:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:16.996 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:03:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:16.997 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:03:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:16.998 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:03:17 compute-0 nova_compute[189493]: 2025-12-09 11:03:17.502 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:17 compute-0 podman[246151]: 2025-12-09 11:03:17.975528073 +0000 UTC m=+0.129194088 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 11:03:20 compute-0 nova_compute[189493]: 2025-12-09 11:03:20.177 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:21 compute-0 podman[246172]: 2025-12-09 11:03:21.979011416 +0000 UTC m=+0.122037848 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.505 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.741 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.741 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.743 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.744 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.744 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.746 189497 INFO nova.compute.manager [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Terminating instance
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.747 189497 DEBUG nova.compute.manager [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 09 11:03:22 compute-0 kernel: tapd6164edf-ad (unregistering): left promiscuous mode
Dec 09 11:03:22 compute-0 NetworkManager[56302]: <info>  [1765278202.8036] device (tapd6164edf-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.825 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:22 compute-0 ovn_controller[97780]: 2025-12-09T11:03:22Z|00054|binding|INFO|Releasing lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 from this chassis (sb_readonly=0)
Dec 09 11:03:22 compute-0 ovn_controller[97780]: 2025-12-09T11:03:22Z|00055|binding|INFO|Setting lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 down in Southbound
Dec 09 11:03:22 compute-0 ovn_controller[97780]: 2025-12-09T11:03:22Z|00056|binding|INFO|Removing iface tapd6164edf-ad ovn-installed in OVS
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.828 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.834 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:9f:5d 192.168.0.98'], port_security=['fa:16:3e:83:9f:5d 192.168.0.98'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-fel25ona52mn-zi55qxbdeak4-port-7xvtkga34xqd', 'neutron:cidrs': '192.168.0.98/24', 'neutron:device_id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-fel25ona52mn-zi55qxbdeak4-port-7xvtkga34xqd', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=d6164edf-adb9-4fa5-9e6d-bae85d8af633) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.837 106644 INFO neutron.agent.ovn.metadata.agent [-] Port d6164edf-adb9-4fa5-9e6d-bae85d8af633 in datapath c5af7354-5afe-400a-9e13-5500648117d8 unbound from our chassis
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.841 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.848 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.857 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[8174dc22-5aea-47d1-973c-bd027bc71035]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:03:22 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec 09 11:03:22 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 35.116s CPU time.
Dec 09 11:03:22 compute-0 systemd-machined[155790]: Machine qemu-3-instance-00000003 terminated.
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.900 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[a67882a3-07a9-44e1-abd7-2f4750400017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.905 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[933b6741-042f-4a57-a916-180921175c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.938 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d57e7f-35ae-4120-a21d-d8b519a01ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.960 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[0998803e-b9e1-4045-b31a-68e5012a6c70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 16329, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246205, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.975 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.980 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.985 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[d19c0d1d-8e5e-4c39-86f0-e7b607827d6b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246207, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246207, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.987 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.990 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.997 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.997 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.998 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.998 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.046 189497 INFO nova.virt.libvirt.driver [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance destroyed successfully.
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.048 189497 DEBUG nova.objects.instance [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 32dd7fb0-7003-48cc-b688-4b94946c911f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.064 189497 DEBUG nova.virt.libvirt.vif [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-09T10:55:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',id=3,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-09T10:55:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-8nh5c9bf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-09T10:55:41Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 09 11:03:23 compute-0 nova_compute[189493]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=32dd7fb0-7003-48cc-b688-4b94946c911f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.064 189497 DEBUG nova.network.os_vif_util [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.067 189497 DEBUG nova.network.os_vif_util [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.067 189497 DEBUG os_vif [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.070 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.071 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6164edf-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.074 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:23 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 11:03:23.064 189497 DEBUG nova.virt.libvirt.vif [None req-79c5c013-c9 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.076 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.084 189497 INFO os_vif [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad')
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.084 189497 INFO nova.virt.libvirt.driver [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Deleting instance files /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f_del
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.085 189497 INFO nova.virt.libvirt.driver [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Deletion of /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f_del complete
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.162 189497 INFO nova.compute.manager [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Took 0.41 seconds to destroy the instance on the hypervisor.
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.163 189497 DEBUG oslo.service.loopingcall [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.164 189497 DEBUG nova.compute.manager [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.164 189497 DEBUG nova.network.neutron [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.491 189497 DEBUG nova.compute.manager [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-unplugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.492 189497 DEBUG oslo_concurrency.lockutils [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.492 189497 DEBUG oslo_concurrency.lockutils [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.492 189497 DEBUG oslo_concurrency.lockutils [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.493 189497 DEBUG nova.compute.manager [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] No waiting events found dispatching network-vif-unplugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.493 189497 DEBUG nova.compute.manager [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-unplugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 09 11:03:23 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:23.651 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:03:23 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:23.652 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.654 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.298 189497 DEBUG nova.compute.manager [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-changed-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.299 189497 DEBUG nova.compute.manager [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Refreshing instance network info cache due to event network-changed-d6164edf-adb9-4fa5-9e6d-bae85d8af633. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.299 189497 DEBUG oslo_concurrency.lockutils [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.300 189497 DEBUG oslo_concurrency.lockutils [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.300 189497 DEBUG nova.network.neutron [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Refreshing network info cache for port d6164edf-adb9-4fa5-9e6d-bae85d8af633 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.579 189497 DEBUG nova.network.neutron [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.602 189497 INFO nova.compute.manager [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Took 1.44 seconds to deallocate network for instance.
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.607 189497 INFO nova.network.neutron [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Port d6164edf-adb9-4fa5-9e6d-bae85d8af633 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.607 189497 DEBUG nova.network.neutron [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.640 189497 DEBUG oslo_concurrency.lockutils [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.666 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.666 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.811 189497 DEBUG nova.compute.provider_tree [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.825 189497 DEBUG nova.scheduler.client.report [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.849 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.871 189497 INFO nova.scheduler.client.report [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 32dd7fb0-7003-48cc-b688-4b94946c911f
Dec 09 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.929 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.586 189497 DEBUG nova.compute.manager [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.587 189497 DEBUG oslo_concurrency.lockutils [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.587 189497 DEBUG oslo_concurrency.lockutils [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.587 189497 DEBUG oslo_concurrency.lockutils [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.588 189497 DEBUG nova.compute.manager [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] No waiting events found dispatching network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.588 189497 WARNING nova.compute.manager [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received unexpected event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 for instance with vm_state deleted and task_state None.
Dec 09 11:03:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:25.655 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:03:26 compute-0 podman[246229]: 2025-12-09 11:03:26.974017817 +0000 UTC m=+0.117158518 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=edpm, managed_by=edpm_ansible)
Dec 09 11:03:26 compute-0 podman[246228]: 2025-12-09 11:03:26.981161357 +0000 UTC m=+0.125408887 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vcs-type=git, version=9.4, config_id=edpm, release-0.7.12=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec 09 11:03:27 compute-0 nova_compute[189493]: 2025-12-09 11:03:27.508 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:28 compute-0 nova_compute[189493]: 2025-12-09 11:03:28.074 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:29 compute-0 podman[203687]: time="2025-12-09T11:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:03:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:03:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Dec 09 11:03:30 compute-0 podman[246267]: 2025-12-09 11:03:30.91875371 +0000 UTC m=+0.073060991 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 11:03:30 compute-0 podman[246268]: 2025-12-09 11:03:30.96335281 +0000 UTC m=+0.101407777 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 09 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:03:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:03:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:03:32 compute-0 nova_compute[189493]: 2025-12-09 11:03:32.511 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:33 compute-0 nova_compute[189493]: 2025-12-09 11:03:33.077 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:33 compute-0 nova_compute[189493]: 2025-12-09 11:03:33.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:34 compute-0 nova_compute[189493]: 2025-12-09 11:03:34.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:35 compute-0 nova_compute[189493]: 2025-12-09 11:03:35.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:36 compute-0 nova_compute[189493]: 2025-12-09 11:03:36.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:36 compute-0 nova_compute[189493]: 2025-12-09 11:03:36.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:36 compute-0 nova_compute[189493]: 2025-12-09 11:03:36.865 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:36 compute-0 podman[246306]: 2025-12-09 11:03:36.965223903 +0000 UTC m=+0.117808764 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, architecture=x86_64, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter)
Dec 09 11:03:37 compute-0 nova_compute[189493]: 2025-12-09 11:03:37.515 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:38 compute-0 nova_compute[189493]: 2025-12-09 11:03:38.043 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278203.0413747, 32dd7fb0-7003-48cc-b688-4b94946c911f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 11:03:38 compute-0 nova_compute[189493]: 2025-12-09 11:03:38.044 189497 INFO nova.compute.manager [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] VM Stopped (Lifecycle Event)
Dec 09 11:03:38 compute-0 nova_compute[189493]: 2025-12-09 11:03:38.070 189497 DEBUG nova.compute.manager [None req-c59678a8-1c02-47ce-8444-ac31349f19b0 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 11:03:38 compute-0 nova_compute[189493]: 2025-12-09 11:03:38.081 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:39 compute-0 podman[246327]: 2025-12-09 11:03:39.044437217 +0000 UTC m=+0.189786806 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 09 11:03:40 compute-0 nova_compute[189493]: 2025-12-09 11:03:40.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:40 compute-0 nova_compute[189493]: 2025-12-09 11:03:40.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:03:40 compute-0 nova_compute[189493]: 2025-12-09 11:03:40.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:03:40 compute-0 podman[246353]: 2025-12-09 11:03:40.96138925 +0000 UTC m=+0.108706392 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 11:03:41 compute-0 nova_compute[189493]: 2025-12-09 11:03:41.196 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:03:41 compute-0 nova_compute[189493]: 2025-12-09 11:03:41.197 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:03:41 compute-0 nova_compute[189493]: 2025-12-09 11:03:41.208 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 11:03:41 compute-0 nova_compute[189493]: 2025-12-09 11:03:41.209 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.487 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.506 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.507 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.508 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.518 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.069 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.070 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.070 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.070 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.082 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.153 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.234 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.235 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.292 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.293 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.353 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.354 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.433 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.441 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.500 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.502 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.595 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.597 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.690 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.691 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.765 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.135 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.136 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=72.16051483154297GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.136 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.136 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.280 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.293 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.314 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.314 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:03:44 compute-0 sshd-session[246401]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Dec 09 11:03:45 compute-0 sshd-session[246403]: Invalid user developer from 159.223.8.217 port 34826
Dec 09 11:03:45 compute-0 sshd-session[246403]: Connection closed by invalid user developer 159.223.8.217 port 34826 [preauth]
Dec 09 11:03:47 compute-0 nova_compute[189493]: 2025-12-09 11:03:47.520 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:48 compute-0 nova_compute[189493]: 2025-12-09 11:03:48.085 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:48 compute-0 podman[246405]: 2025-12-09 11:03:48.932315403 +0000 UTC m=+0.084274190 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Dec 09 11:03:49 compute-0 nova_compute[189493]: 2025-12-09 11:03:49.315 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:03:49 compute-0 nova_compute[189493]: 2025-12-09 11:03:49.315 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:03:52 compute-0 nova_compute[189493]: 2025-12-09 11:03:52.524 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:52 compute-0 podman[246427]: 2025-12-09 11:03:52.978505574 +0000 UTC m=+0.115256148 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:03:53 compute-0 nova_compute[189493]: 2025-12-09 11:03:53.087 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:54 compute-0 sshd-session[246401]: Connection closed by authenticating user root 139.19.117.197 port 40912 [preauth]
Dec 09 11:03:56 compute-0 ovn_controller[97780]: 2025-12-09T11:03:56Z|00057|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Dec 09 11:03:57 compute-0 nova_compute[189493]: 2025-12-09 11:03:57.528 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:57 compute-0 podman[246452]: 2025-12-09 11:03:57.956029349 +0000 UTC m=+0.097079981 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 11:03:57 compute-0 podman[246451]: 2025-12-09 11:03:57.961199007 +0000 UTC m=+0.113164600 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 09 11:03:58 compute-0 nova_compute[189493]: 2025-12-09 11:03:58.090 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:03:59 compute-0 sshd-session[246487]: Received disconnect from 193.46.255.103 port 21764:11:  [preauth]
Dec 09 11:03:59 compute-0 sshd-session[246487]: Disconnected from authenticating user root 193.46.255.103 port 21764 [preauth]
Dec 09 11:03:59 compute-0 podman[203687]: time="2025-12-09T11:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:03:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:03:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec 09 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:04:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:04:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:04:01 compute-0 sshd-session[246489]: Accepted publickey for zuul from 38.102.83.145 port 57116 ssh2: RSA SHA256:OoA6ymXz1bGWu/N8aYc4tZBvI5ffrgdXcLpAm+SU/Q8
Dec 09 11:04:01 compute-0 systemd-logind[806]: New session 30 of user zuul.
Dec 09 11:04:01 compute-0 systemd[1]: Started Session 30 of User zuul.
Dec 09 11:04:01 compute-0 sshd-session[246489]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:04:01 compute-0 podman[246493]: 2025-12-09 11:04:01.620389142 +0000 UTC m=+0.088703867 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 09 11:04:01 compute-0 podman[246491]: 2025-12-09 11:04:01.624830641 +0000 UTC m=+0.087254509 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:04:02 compute-0 sudo[246700]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvxfpcnihshxcwgjoldyhtxhxcxpncpz ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765278241.700982-59526-39635581251020/AnsiballZ_command.py'
Dec 09 11:04:02 compute-0 sudo[246700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:04:02 compute-0 nova_compute[189493]: 2025-12-09 11:04:02.529 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:02 compute-0 python3[246702]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
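The Ansible task above shells out to podman ps -a --format "{{.Names}} {{.Status}}" and greps for ceilometer_agent_compute. The same check done directly in Python (a sketch; only the container name and the --format string are taken from the log):

    import subprocess

    # List every container as "<name> <status>", mirroring the --format string in the task.
    ps = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout

    # Keep only the ceilometer_agent_compute line, as the grep in the task does.
    matches = [line for line in ps.splitlines() if "ceilometer_agent_compute" in line]
    print("\n".join(matches) or "container not found")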
Dec 09 11:04:02 compute-0 sudo[246700]: pam_unix(sudo:session): session closed for user root
Dec 09 11:04:03 compute-0 nova_compute[189493]: 2025-12-09 11:04:03.093 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:07 compute-0 nova_compute[189493]: 2025-12-09 11:04:07.533 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:07 compute-0 podman[246742]: 2025-12-09 11:04:07.994627666 +0000 UTC m=+0.131598213 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Dec 09 11:04:08 compute-0 nova_compute[189493]: 2025-12-09 11:04:08.097 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:10 compute-0 podman[246762]: 2025-12-09 11:04:10.013176348 +0000 UTC m=+0.154051892 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 09 11:04:11 compute-0 podman[246787]: 2025-12-09 11:04:11.977262038 +0000 UTC m=+0.112387940 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 11:04:12 compute-0 nova_compute[189493]: 2025-12-09 11:04:12.536 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:13 compute-0 nova_compute[189493]: 2025-12-09 11:04:13.100 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:13 compute-0 sshd-session[246811]: Invalid user developer from 159.223.8.217 port 58022
Dec 09 11:04:13 compute-0 sshd-session[246811]: Connection closed by invalid user developer 159.223.8.217 port 58022 [preauth]
Dec 09 11:04:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:04:16.997 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:04:16.999 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:04:17.000 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
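The Acquiring / acquired / "released" triplet above (and the same pattern around the nova locks further down) is the standard oslo_concurrency.lockutils wrapper: the "inner" function logs the wait and hold times seen in the journal. A minimal sketch of that pattern using only the public oslo.concurrency API; the lock name is the one from the log:

    from oslo_concurrency import lockutils

    # Decorator form: the wrapped call runs with the named lock held, which is
    # what produces the Acquiring/acquired/released lines in the journal.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # inspect monitored child processes while holding the lock

    # Equivalent explicit form using the context manager.
    with lockutils.lock("_check_child_processes"):
        pass  # critical section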
Dec 09 11:04:17 compute-0 nova_compute[189493]: 2025-12-09 11:04:17.539 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:18 compute-0 nova_compute[189493]: 2025-12-09 11:04:18.104 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:20 compute-0 podman[246814]: 2025-12-09 11:04:20.018570854 +0000 UTC m=+0.156539788 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.401 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.403 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.427 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 09 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.533 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.534 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.551 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 09 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.552 189497 INFO nova.compute.claims [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Claim successful on node compute-0.ctlplane.example.com
Dec 09 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.731 189497 DEBUG nova.compute.provider_tree [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.121 189497 DEBUG nova.scheduler.client.report [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
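The inventory dict reported for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 is what the placement service allocates against. As a worked sketch, the usual capacity rule (total - reserved) * allocation_ratio applied to the logged values (the rule is the standard placement convention, stated here as an assumption rather than read from this deployment):

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity ~ {capacity}")
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 70.2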
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.205 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.207 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.272 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.298 189497 INFO nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.343 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.425 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.428 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.429 189497 INFO nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Creating image(s)
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.432 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.433 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.435 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.437 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.439 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.567 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.817 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.920 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.part --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.923 189497 DEBUG nova.virt.images [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] 9d62c0b6-ea01-495f-87e9-b5532d7a4e36 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 09 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.925 189497 DEBUG nova.privsep.utils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 09 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.926 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.part /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.106 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.185 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.part /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.converted" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.191 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.283 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.converted --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.285 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
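The fetch_func_sync block that just released its lock is nova's image-cache fill: inspect the downloaded .part file, convert it from qcow2 to raw, then re-inspect the result before it becomes the _base cache entry. A condensed sketch of the same qemu-img sequence (paths are from the log; the wrapper is illustrative, and nova additionally runs the info calls under oslo_concurrency.prlimit as shown above):

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f"

    def img_info(path: str) -> dict:
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)

    # 1. The downloaded Glance image is qcow2 ...
    assert img_info(base + ".part")["format"] == "qcow2"

    # 2. ... so it is flattened to raw for the local image cache.
    subprocess.run(
        ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
         base + ".part", base + ".converted"],
        check=True,
    )

    # 3. Verify the converted file before it is moved into place as the cache entry.
    assert img_info(base + ".converted")["format"] == "raw"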
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.296 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.298 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.311 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.315 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.317 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.318 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:04:23.318566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.327 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.335 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
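Each "Polling pollster ... / Finished polling ..." pair is one pollster pass over the instances returned by the local_instances discovery, producing one cumulative sample per instance. A highly simplified sketch of that shape; the class and the counter lookup are illustrative stand-ins (the volumes happen to match the two samples logged above), not ceilometer internals:

    from dataclasses import dataclass

    @dataclass
    class Sample:
        instance_id: str
        meter: str    # e.g. "network.incoming.bytes"
        volume: int   # cumulative counter read from libvirt

    def poll(meter, instances, read_counter):
        # One pollster pass: read the counter for every discovered instance.
        return [Sample(i, meter, read_counter(i)) for i in instances]

    instances = ["7b43ca09-ed65-4465-9fcc-95caa6dc9a88",
                 "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f"]
    counters = {"7b43ca09-ed65-4465-9fcc-95caa6dc9a88": 1654,
                "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f": 2346}
    for s in poll("network.incoming.bytes", instances, counters.get):
        print(s)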
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.337 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.338 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:04:23.338396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.379 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.380 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.380 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.399 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.401 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.402 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.417 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.417 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.418 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.418 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.420 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.420 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:04:23.419855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.420 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.421 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.422 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.422 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.422 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.423 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.424 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:04:23.422217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.424 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:04:23.424596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.442 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.518 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.518 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.518 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.538 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.539 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f,backing_fmt=raw /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.586 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f,backing_fmt=raw /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.588 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.589 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.610 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.611 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.612 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.614 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.617 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:04:23.616365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.617 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.620 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:04:23.622041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.622 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.623 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.623 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.624 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.624 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.624 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.625 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.627 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.627 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.628 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.628 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.628 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.629 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:04:23.626920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:04:23.630256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:04:23.632192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.663 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 38010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.678 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.680 189497 DEBUG nova.virt.disk.api [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.681 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.689 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 47020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.690 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.691 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.692 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:04:23.691691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.693 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.693 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.694 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.694 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.694 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.695 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:04:23.697028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.697 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.698 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.698 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.698 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.699 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.699 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.700 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.701 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:04:23.701845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.702 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.702 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2223058984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.703 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.703 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.704 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.704 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.705 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.706 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:04:23.707054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.707 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.708 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.709 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:04:23.710116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.710 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.711 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.711 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:04:23.712356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.713 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:04:23.715573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.716 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.716 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:04:23.717523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:04:23.719499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:04:23.721522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.722 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.722 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:04:23.723476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.724 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.724 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:04:23.725514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.726 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.726 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:04:23.727448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.728 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.728 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:04:23.729458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
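The memory.usage samples just above are reported in MiB. A quick sanity check against the 512 MiB fvt_testing_flavor that appears later in this spawn (plain Python; values copied from the log, flavor size taken from the guest XML further down):

    # Sanity check: guest memory usage vs. flavor memory (values from this log).
    used_mib = 48.953125     # memory.usage sample for instance 7b43ca09-...
    flavor_mib = 512         # fvt_testing_flavor memory_mb
    print(f"{used_mib / flavor_mib:.1%}")   # roughly 9.6% of guest RAM in use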
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
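The polling cycle above walks every compute pollster the agent loaded through stevedore entry points, one "Polling ... / Finished polling ..." pair per meter. A minimal sketch that lists the same plugin set, assuming the stevedore and ceilometer packages from this node and that 'ceilometer.poll.compute' is the entry-point namespace in use:

    # Minimal sketch: enumerate the compute pollsters the agent can load.
    # Assumptions: stevedore/ceilometer installed locally; the
    # 'ceilometer.poll.compute' entry-point namespace is the one used here.
    from stevedore import extension

    mgr = extension.ExtensionManager('ceilometer.poll.compute',
                                     invoke_on_load=False)
    for name in sorted(mgr.names()):
        print(name)   # e.g. memory.usage, network.incoming.packets, ...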
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.788 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.790 189497 DEBUG nova.virt.disk.api [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.791 189497 DEBUG nova.objects.instance [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 44ac2ce0-9161-4b3c-baf9-be45585c5f0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.813 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.814 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.815 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
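The Acquiring/acquired/released triplet above is oslo.concurrency's standard logging around an external (file-based) lock protecting the instance's disk.info file. A small sketch of the same primitive, with an illustrative lock name and path rather than the ones nova uses:

    # Sketch of the external-lock pattern logged above (oslo.concurrency).
    # Assumptions: oslo.concurrency installed; lock name and lock_path are
    # illustrative only, not nova's real values.
    from oslo_concurrency import lockutils

    with lockutils.lock('disk.info.example', external=True, lock_path='/tmp'):
        pass   # critical section; nova rewrites the disk.info JSON here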
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.848 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.915 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.916 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.917 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.932 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:23 compute-0 podman[246864]: 2025-12-09 11:04:23.953305722 +0000 UTC m=+0.088900433 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
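The podman record above (health_status=healthy, health_failing_streak=0) comes from the container's configured healthcheck. The same status can be read back out of podman; a subprocess sketch, assuming the caller can reach the same podman instance, with the container name taken from the log:

    # Read the podman_exporter health status back from podman.
    # Assumption: run with access to the same podman socket as the exporter.
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}',
         'podman_exporter'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # expected: healthy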
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.013 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.014 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.062 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.064 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.065 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.128 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.130 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
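The ephemeral disk created above is a qcow2 overlay on the cached _base image, invoked through oslo.concurrency's processutils. A sketch of the same call, with the paths copied verbatim from the log and the assumption that qemu-img is on PATH (the env LC_ALL/LANG wrapper from the logged command is omitted):

    # Sketch of the overlay-creation step logged above.
    # Assumptions: qemu-img on PATH; paths copied verbatim from the log.
    from oslo_concurrency import processutils

    processutils.execute(
        'qemu-img', 'create', '-f', 'qcow2',
        '-o', 'backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,'
              'backing_fmt=raw',
        '/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0',
        '1073741824')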
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.131 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Ensure instance console log exists: /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.131 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.132 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.133 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.137 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T11:04:07Z,direct_url=<?>,disk_format='qcow2',id=9d62c0b6-ea01-495f-87e9-b5532d7a4e36,min_disk=0,min_ram=0,name='fvt_testing_image',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T11:04:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '9d62c0b6-ea01-495f-87e9-b5532d7a4e36'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.149 189497 WARNING nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.161 189497 DEBUG nova.virt.libvirt.host [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.162 189497 DEBUG nova.virt.libvirt.host [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.171 189497 DEBUG nova.virt.libvirt.host [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.172 189497 DEBUG nova.virt.libvirt.host [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
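The two probes above first look for a cgroup v1 cpu controller (absent on this RHEL 9 host) and then find it in the unified cgroup v2 hierarchy. A rough, simplified equivalent of the v2 check, assuming the unified hierarchy is mounted at /sys/fs/cgroup (this is not nova's exact code path):

    # Rough equivalent of the cgroup v2 cpu-controller probe above.
    # Assumption: unified cgroup hierarchy mounted at /sys/fs/cgroup.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        controllers = f.read().split()
    print('cpu' in controllers)   # True on this host, per the log line above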
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.173 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.173 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T11:04:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='922e7637-0894-48fe-9b2a-1166c1701507',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T11:04:07Z,direct_url=<?>,disk_format='qcow2',id=9d62c0b6-ea01-495f-87e9-b5532d7a4e36,min_disk=0,min_ram=0,name='fvt_testing_image',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T11:04:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.174 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.175 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.175 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.175 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.176 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.176 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.177 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.177 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.177 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.178 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
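With no flavor or image CPU topology hints (all constraints 0:0:0), the only admissible topology for a 1-vCPU guest is 1 socket x 1 core x 1 thread, which is what the driver chooses above. A toy enumeration that illustrates why, hedged as an illustration rather than nova's actual algorithm:

    # Toy illustration of the 1-vCPU topology choice above (not nova's code):
    vcpus = 1
    topologies = [(s, c, t)
                  for s in range(1, vcpus + 1)
                  for c in range(1, vcpus + 1)
                  for t in range(1, vcpus + 1)
                  if s * c * t == vcpus]
    print(topologies)   # [(1, 1, 1)]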
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.185 189497 DEBUG nova.objects.instance [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44ac2ce0-9161-4b3c-baf9-be45585c5f0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.210 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] End _get_guest_xml xml=<domain type="kvm">
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <uuid>44ac2ce0-9161-4b3c-baf9-be45585c5f0e</uuid>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <name>instance-00000005</name>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <memory>524288</memory>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <vcpu>1</vcpu>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <metadata>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <nova:name>fvt_testing_server</nova:name>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <nova:creationTime>2025-12-09 11:04:24</nova:creationTime>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <nova:flavor name="fvt_testing_flavor">
Dec 09 11:04:24 compute-0 nova_compute[189493]:         <nova:memory>512</nova:memory>
Dec 09 11:04:24 compute-0 nova_compute[189493]:         <nova:disk>1</nova:disk>
Dec 09 11:04:24 compute-0 nova_compute[189493]:         <nova:swap>0</nova:swap>
Dec 09 11:04:24 compute-0 nova_compute[189493]:         <nova:ephemeral>1</nova:ephemeral>
Dec 09 11:04:24 compute-0 nova_compute[189493]:         <nova:vcpus>1</nova:vcpus>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       </nova:flavor>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <nova:owner>
Dec 09 11:04:24 compute-0 nova_compute[189493]:         <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec 09 11:04:24 compute-0 nova_compute[189493]:         <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       </nova:owner>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <nova:root type="image" uuid="9d62c0b6-ea01-495f-87e9-b5532d7a4e36"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <nova:ports/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     </nova:instance>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   </metadata>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <sysinfo type="smbios">
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <system>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <entry name="manufacturer">RDO</entry>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <entry name="product">OpenStack Compute</entry>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <entry name="serial">44ac2ce0-9161-4b3c-baf9-be45585c5f0e</entry>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <entry name="uuid">44ac2ce0-9161-4b3c-baf9-be45585c5f0e</entry>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <entry name="family">Virtual Machine</entry>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     </system>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   </sysinfo>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <os>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <boot dev="hd"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <smbios mode="sysinfo"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   </os>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <features>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <acpi/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <apic/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <vmcoreinfo/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   </features>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <clock offset="utc">
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <timer name="pit" tickpolicy="delay"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <timer name="hpet" present="no"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   </clock>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <cpu mode="host-model" match="exact">
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <topology sockets="1" cores="1" threads="1"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   </cpu>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   <devices>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <target dev="vda" bus="virtio"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     </disk>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <disk type="file" device="disk">
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <driver name="qemu" type="qcow2" cache="none"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <target dev="vdb" bus="virtio"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     </disk>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <disk type="file" device="cdrom">
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <driver name="qemu" type="raw" cache="none"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <source file="/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.config"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <target dev="sda" bus="sata"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     </disk>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <serial type="pty">
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <log file="/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/console.log" append="off"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     </serial>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <video>
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <model type="virtio"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     </video>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <input type="tablet" bus="usb"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <rng model="virtio">
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <backend model="random">/dev/urandom</backend>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     </rng>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="pci" model="pcie-root-port"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <controller type="usb" index="0"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     <memballoon model="virtio">
Dec 09 11:04:24 compute-0 nova_compute[189493]:       <stats period="10"/>
Dec 09 11:04:24 compute-0 nova_compute[189493]:     </memballoon>
Dec 09 11:04:24 compute-0 nova_compute[189493]:   </devices>
Dec 09 11:04:24 compute-0 nova_compute[189493]: </domain>
Dec 09 11:04:24 compute-0 nova_compute[189493]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.292 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.294 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.294 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.296 189497 INFO nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Using config drive
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.736 189497 INFO nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Creating config drive at /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.config
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.747 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9s0tggx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.876 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9s0tggx" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
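The config drive built above is a plain ISO 9660 image with volume label config-2, produced with mkisofs from a temporary metadata tree. An equivalent invocation from Python, stripped to the essentials; /tmp/cfgdrive stands in for the staged metadata directory (tmpf9s0tggx in the log) and mkisofs is assumed to be installed:

    # Equivalent config-drive build, reduced to the essential flags.
    # Assumptions: mkisofs installed; /tmp/cfgdrive is a stand-in for the
    # temporary metadata tree nova stages.
    import subprocess

    subprocess.run(
        ['mkisofs', '-o', '/tmp/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-J', '-r', '-V', 'config-2', '/tmp/cfgdrive'],
        check=True)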
Dec 09 11:04:24 compute-0 systemd-machined[155790]: New machine qemu-5-instance-00000005.
Dec 09 11:04:25 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec 09 11:04:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 09 11:04:25 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.148 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765278266.1475823, 44ac2ce0-9161-4b3c-baf9-be45585c5f0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.150 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] VM Resumed (Lifecycle Event)
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.155 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.155 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.164 189497 INFO nova.virt.libvirt.driver [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance spawned successfully.
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.165 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.171 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.188 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.198 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.198 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.199 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.200 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.200 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.201 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.213 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.214 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765278266.1540236, 44ac2ce0-9161-4b3c-baf9-be45585c5f0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.215 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] VM Started (Lifecycle Event)
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.242 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.254 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.270 189497 INFO nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Took 4.84 seconds to spawn the instance on the hypervisor.
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.271 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.279 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.360 189497 INFO nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Took 5.87 seconds to build instance.
Dec 09 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.390 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:27 compute-0 nova_compute[189493]: 2025-12-09 11:04:27.548 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:27 compute-0 sshd-session[246946]: Connection closed by authenticating user root 196.251.100.74 port 37044 [preauth]
Dec 09 11:04:28 compute-0 nova_compute[189493]: 2025-12-09 11:04:28.112 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:28 compute-0 virtproxyd[246920]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 09 11:04:28 compute-0 virtproxyd[246920]: hostname: compute-0
Dec 09 11:04:28 compute-0 virtproxyd[246920]: End of file while reading data: Input/output error
Dec 09 11:04:28 compute-0 podman[246948]: 2025-12-09 11:04:28.961554236 +0000 UTC m=+0.096906976 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, io.buildah.version=1.29.0, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible)
Dec 09 11:04:28 compute-0 podman[246949]: 2025-12-09 11:04:28.984994152 +0000 UTC m=+0.119890970 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2)
Dec 09 11:04:29 compute-0 podman[203687]: time="2025-12-09T11:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:04:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:04:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec 09 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:04:31 compute-0 podman[246986]: 2025-12-09 11:04:31.94128713 +0000 UTC m=+0.088652697 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec 09 11:04:31 compute-0 podman[246985]: 2025-12-09 11:04:31.952308374 +0000 UTC m=+0.101821167 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 09 11:04:32 compute-0 nova_compute[189493]: 2025-12-09 11:04:32.551 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:33 compute-0 nova_compute[189493]: 2025-12-09 11:04:33.115 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:34 compute-0 nova_compute[189493]: 2025-12-09 11:04:34.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:35 compute-0 nova_compute[189493]: 2025-12-09 11:04:35.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:36 compute-0 nova_compute[189493]: 2025-12-09 11:04:36.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:36 compute-0 nova_compute[189493]: 2025-12-09 11:04:36.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:37 compute-0 nova_compute[189493]: 2025-12-09 11:04:37.553 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:38 compute-0 nova_compute[189493]: 2025-12-09 11:04:38.117 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:38 compute-0 nova_compute[189493]: 2025-12-09 11:04:38.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:38 compute-0 podman[247025]: 2025-12-09 11:04:38.974026548 +0000 UTC m=+0.110872269 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Dec 09 11:04:40 compute-0 nova_compute[189493]: 2025-12-09 11:04:40.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:40 compute-0 nova_compute[189493]: 2025-12-09 11:04:40.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:04:41 compute-0 podman[247045]: 2025-12-09 11:04:41.08462416 +0000 UTC m=+0.217027023 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Dec 09 11:04:41 compute-0 nova_compute[189493]: 2025-12-09 11:04:41.831 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:04:41 compute-0 nova_compute[189493]: 2025-12-09 11:04:41.831 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:04:41 compute-0 nova_compute[189493]: 2025-12-09 11:04:41.832 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 11:04:42 compute-0 sshd-session[247068]: Invalid user docker from 159.223.8.217 port 50688
Dec 09 11:04:42 compute-0 sshd-session[247068]: Connection closed by invalid user docker 159.223.8.217 port 50688 [preauth]
Dec 09 11:04:42 compute-0 nova_compute[189493]: 2025-12-09 11:04:42.555 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:42 compute-0 podman[247070]: 2025-12-09 11:04:42.578004629 +0000 UTC m=+0.115126983 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 11:04:43 compute-0 nova_compute[189493]: 2025-12-09 11:04:43.119 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.037 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.039 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.040 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.040 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.041 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.043 189497 INFO nova.compute.manager [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Terminating instance
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.046 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-44ac2ce0-9161-4b3c-baf9-be45585c5f0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.047 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-44ac2ce0-9161-4b3c-baf9-be45585c5f0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.047 189497 DEBUG nova.network.neutron [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.250 189497 DEBUG nova.network.neutron [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.389 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.403 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.404 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.404 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.405 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.430 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.431 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.431 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.431 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.543 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.652 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.654 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.736 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.737 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.800 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.802 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.905 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.914 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.996 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.997 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.087 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.089 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.160 189497 DEBUG nova.network.neutron [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.175 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-44ac2ce0-9161-4b3c-baf9-be45585c5f0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.176 189497 DEBUG nova.compute.manager [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.188 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.189 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:45 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec 09 11:04:45 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 20.624s CPU time.
Dec 09 11:04:45 compute-0 systemd-machined[155790]: Machine qemu-5-instance-00000005 terminated.
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.273 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.282 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.368 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.369 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.440 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.442 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.472 189497 INFO nova.virt.libvirt.driver [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance destroyed successfully.
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.474 189497 DEBUG nova.objects.instance [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 44ac2ce0-9161-4b3c-baf9-be45585c5f0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.499 189497 INFO nova.virt.libvirt.driver [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Deleting instance files /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e_del
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.500 189497 INFO nova.virt.libvirt.driver [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Deletion of /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e_del complete
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.510 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.511 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.577 189497 INFO nova.compute.manager [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Took 0.40 seconds to destroy the instance on the hypervisor.
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.579 189497 DEBUG oslo.service.loopingcall [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.580 189497 DEBUG nova.compute.manager [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.580 189497 DEBUG nova.network.neutron [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.613 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.813 189497 DEBUG nova.network.neutron [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.835 189497 DEBUG nova.network.neutron [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.860 189497 INFO nova.compute.manager [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Took 0.28 seconds to deallocate network for instance.
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.920 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.922 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.053 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.121 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.122 189497 DEBUG nova.compute.provider_tree [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.210 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.242 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.299 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.301 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4771MB free_disk=72.13198852539062GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.301 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.320 189497 DEBUG nova.compute.provider_tree [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.383 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.452 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.456 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.504 189497 INFO nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 44ac2ce0-9161-4b3c-baf9-be45585c5f0e
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.559 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.559 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.559 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.560 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
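The figures in this final resource view line up with the two allocations reported just above (each DISK_GB: 2, MEMORY_MB: 512, VCPU: 1): used_vcpus and used_disk are the plain sums, and used_ram is consistent with those sums plus the 512 MB reserved for the host in the inventory. A small arithmetic check (values copied from the log, not computed by Nova here):

    allocations = [
        {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1},  # instance 41a113e3-...
        {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1},  # instance 7b43ca09-...
    ]
    reserved_ram_mb = 512  # MEMORY_MB 'reserved' from the inventory above

    used_vcpus = sum(a['VCPU'] for a in allocations)                           # 2
    used_disk_gb = sum(a['DISK_GB'] for a in allocations)                      # 4
    used_ram_mb = reserved_ram_mb + sum(a['MEMORY_MB'] for a in allocations)   # 1536
    print(used_vcpus, used_disk_gb, used_ram_mb)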
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.579 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.641 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.655 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.681 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.682 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.682 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.682 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:47 compute-0 nova_compute[189493]: 2025-12-09 11:04:47.559 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:48 compute-0 nova_compute[189493]: 2025-12-09 11:04:48.121 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:49 compute-0 nova_compute[189493]: 2025-12-09 11:04:49.856 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:49 compute-0 nova_compute[189493]: 2025-12-09 11:04:49.857 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:04:50 compute-0 podman[247147]: 2025-12-09 11:04:50.996702686 +0000 UTC m=+0.134950742 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 09 11:04:52 compute-0 nova_compute[189493]: 2025-12-09 11:04:52.564 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:53 compute-0 nova_compute[189493]: 2025-12-09 11:04:53.125 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:54 compute-0 podman[247168]: 2025-12-09 11:04:54.980242804 +0000 UTC m=+0.121612115 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 11:04:57 compute-0 nova_compute[189493]: 2025-12-09 11:04:57.569 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:58 compute-0 nova_compute[189493]: 2025-12-09 11:04:58.128 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:04:59 compute-0 podman[203687]: time="2025-12-09T11:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:04:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:04:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec 09 11:04:59 compute-0 nova_compute[189493]: 2025-12-09 11:04:59.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:04:59 compute-0 nova_compute[189493]: 2025-12-09 11:04:59.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 11:04:59 compute-0 nova_compute[189493]: 2025-12-09 11:04:59.867 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 11:04:59 compute-0 podman[247189]: 2025-12-09 11:04:59.98941059 +0000 UTC m=+0.122585812 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.buildah.version=1.29.0, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, architecture=x86_64, name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4)
Dec 09 11:05:00 compute-0 podman[247190]: 2025-12-09 11:05:00.037933716 +0000 UTC m=+0.163009342 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 09 11:05:00 compute-0 nova_compute[189493]: 2025-12-09 11:05:00.465 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278285.4533355, 44ac2ce0-9161-4b3c-baf9-be45585c5f0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 11:05:00 compute-0 nova_compute[189493]: 2025-12-09 11:05:00.466 189497 INFO nova.compute.manager [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] VM Stopped (Lifecycle Event)
Dec 09 11:05:00 compute-0 nova_compute[189493]: 2025-12-09 11:05:00.498 189497 DEBUG nova.compute.manager [None req-c7990f40-4c23-4d8b-8748-81fc154ada98 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:05:02 compute-0 sshd-session[246513]: Received disconnect from 38.102.83.145 port 57116:11: disconnected by user
Dec 09 11:05:02 compute-0 sshd-session[246513]: Disconnected from user zuul 38.102.83.145 port 57116
Dec 09 11:05:02 compute-0 sshd-session[246489]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:05:02 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec 09 11:05:02 compute-0 systemd[1]: session-30.scope: Consumed 1.209s CPU time.
Dec 09 11:05:02 compute-0 systemd-logind[806]: Session 30 logged out. Waiting for processes to exit.
Dec 09 11:05:02 compute-0 systemd-logind[806]: Removed session 30.
Dec 09 11:05:02 compute-0 podman[247228]: 2025-12-09 11:05:02.400015287 +0000 UTC m=+0.106928474 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 09 11:05:02 compute-0 podman[247227]: 2025-12-09 11:05:02.40274626 +0000 UTC m=+0.113802749 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 09 11:05:02 compute-0 nova_compute[189493]: 2025-12-09 11:05:02.571 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:03 compute-0 nova_compute[189493]: 2025-12-09 11:05:03.131 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:07 compute-0 nova_compute[189493]: 2025-12-09 11:05:07.575 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:08 compute-0 nova_compute[189493]: 2025-12-09 11:05:08.135 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:08 compute-0 rsyslogd[236818]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 09 11:05:09 compute-0 podman[247266]: 2025-12-09 11:05:09.989917203 +0000 UTC m=+0.125683805 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec 09 11:05:10 compute-0 sshd-session[247287]: Invalid user docker from 159.223.8.217 port 42776
Dec 09 11:05:11 compute-0 sshd-session[247287]: Connection closed by invalid user docker 159.223.8.217 port 42776 [preauth]
Dec 09 11:05:12 compute-0 podman[247289]: 2025-12-09 11:05:12.065214411 +0000 UTC m=+0.198257571 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 11:05:12 compute-0 nova_compute[189493]: 2025-12-09 11:05:12.577 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:12 compute-0 podman[247315]: 2025-12-09 11:05:12.960678997 +0000 UTC m=+0.105816875 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 11:05:13 compute-0 nova_compute[189493]: 2025-12-09 11:05:13.139 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:05:16.998 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:05:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:05:17.001 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:05:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:05:17.003 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:05:17 compute-0 nova_compute[189493]: 2025-12-09 11:05:17.579 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:18 compute-0 nova_compute[189493]: 2025-12-09 11:05:18.142 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:21 compute-0 podman[247340]: 2025-12-09 11:05:21.973649168 +0000 UTC m=+0.115721728 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec 09 11:05:22 compute-0 nova_compute[189493]: 2025-12-09 11:05:22.582 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:23 compute-0 nova_compute[189493]: 2025-12-09 11:05:23.145 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:23 compute-0 sshd-session[247361]: Accepted publickey for zuul from 38.102.83.145 port 59018 ssh2: RSA SHA256:OoA6ymXz1bGWu/N8aYc4tZBvI5ffrgdXcLpAm+SU/Q8
Dec 09 11:05:23 compute-0 systemd-logind[806]: New session 31 of user zuul.
Dec 09 11:05:23 compute-0 systemd[1]: Started Session 31 of User zuul.
Dec 09 11:05:23 compute-0 sshd-session[247361]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:05:24 compute-0 sudo[247538]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsglmsztikatffyjozeskibsqgnjsetf ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765278324.0429678-60280-36409721241962/AnsiballZ_command.py'
Dec 09 11:05:24 compute-0 sudo[247538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:05:24 compute-0 python3[247540]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
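The Ansible shell task logged here is a simple liveness check: list every container with its status and keep only the node_exporter line. The equivalent check in plain Python (subprocess call plus filter; the printed output is illustrative):

    import subprocess

    # Same pipeline as the logged command: podman ps -a --format ... | grep node_exporter
    out = subprocess.run(
        ['podman', 'ps', '-a', '--format', '{{.Names}} {{.Status}}'],
        capture_output=True, text=True, check=True,
    ).stdout
    print([line for line in out.splitlines() if 'node_exporter' in line])
    # e.g. ['node_exporter Up 2 hours (healthy)']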
Dec 09 11:05:25 compute-0 sudo[247538]: pam_unix(sudo:session): session closed for user root
Dec 09 11:05:25 compute-0 podman[247578]: 2025-12-09 11:05:25.950969858 +0000 UTC m=+0.093120432 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:05:27 compute-0 nova_compute[189493]: 2025-12-09 11:05:27.584 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:28 compute-0 nova_compute[189493]: 2025-12-09 11:05:28.148 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:29 compute-0 podman[203687]: time="2025-12-09T11:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:05:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:05:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec 09 11:05:30 compute-0 podman[247601]: 2025-12-09 11:05:30.943020616 +0000 UTC m=+0.090071962 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 09 11:05:30 compute-0 podman[247602]: 2025-12-09 11:05:30.97270504 +0000 UTC m=+0.117578369 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 09 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:05:32 compute-0 nova_compute[189493]: 2025-12-09 11:05:32.586 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:32 compute-0 podman[247729]: 2025-12-09 11:05:32.986701999 +0000 UTC m=+0.123130883 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 11:05:32 compute-0 podman[247722]: 2025-12-09 11:05:32.988869636 +0000 UTC m=+0.132421016 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 11:05:33 compute-0 nova_compute[189493]: 2025-12-09 11:05:33.151 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:33 compute-0 sudo[247845]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwcwikxyqdsgwaqnsdhryqgyuwjzqlri ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765278332.5837038-60441-56464019281109/AnsiballZ_command.py'
Dec 09 11:05:33 compute-0 sudo[247845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:05:33 compute-0 python3[247847]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:05:33 compute-0 sudo[247845]: pam_unix(sudo:session): session closed for user root
Dec 09 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.863 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.864 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.890 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.892 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.893 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:37 compute-0 nova_compute[189493]: 2025-12-09 11:05:37.587 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:38 compute-0 nova_compute[189493]: 2025-12-09 11:05:38.155 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:38 compute-0 sshd-session[247886]: Invalid user docker from 159.223.8.217 port 40496
Dec 09 11:05:38 compute-0 sshd-session[247886]: Connection closed by invalid user docker 159.223.8.217 port 40496 [preauth]
Dec 09 11:05:39 compute-0 nova_compute[189493]: 2025-12-09 11:05:39.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:40 compute-0 podman[247888]: 2025-12-09 11:05:40.962553747 +0000 UTC m=+0.111801588 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible)
Dec 09 11:05:42 compute-0 nova_compute[189493]: 2025-12-09 11:05:42.590 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.186 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.188 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.188 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.188 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:05:43 compute-0 podman[247961]: 2025-12-09 11:05:43.291128115 +0000 UTC m=+0.074810113 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 11:05:43 compute-0 podman[247937]: 2025-12-09 11:05:43.356065778 +0000 UTC m=+0.142839247 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 09 11:05:43 compute-0 sudo[248126]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfdkmgtustbwiwsqhcdntoavailsodbz ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765278342.7869933-60594-230724582629639/AnsiballZ_command.py'
Dec 09 11:05:43 compute-0 sudo[248126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.870 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.871 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.871 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.872 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:05:43 compute-0 python3[248128]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:05:44 compute-0 sudo[248126]: pam_unix(sudo:session): session closed for user root
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.176 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.196 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.197 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.198 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.198 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.227 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.228 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.228 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.229 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.344 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.455 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.457 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.557 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.559 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.593 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.626 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.628 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.706 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.715 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.795 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.796 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.891 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.892 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.950 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.951 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.036 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.190 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.454 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.455 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4838MB free_disk=72.13289260864258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.456 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.456 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.659 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.660 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.660 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.661 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.726 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.739 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.755 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.756 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:05:52 compute-0 nova_compute[189493]: 2025-12-09 11:05:52.596 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:52 compute-0 podman[248194]: 2025-12-09 11:05:52.935643977 +0000 UTC m=+0.090361868 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 09 11:05:53 compute-0 nova_compute[189493]: 2025-12-09 11:05:53.193 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:53 compute-0 nova_compute[189493]: 2025-12-09 11:05:53.400 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:05:53 compute-0 nova_compute[189493]: 2025-12-09 11:05:53.401 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:05:56 compute-0 podman[248215]: 2025-12-09 11:05:56.955238139 +0000 UTC m=+0.102019732 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:05:57 compute-0 nova_compute[189493]: 2025-12-09 11:05:57.597 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:58 compute-0 nova_compute[189493]: 2025-12-09 11:05:58.196 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:05:59 compute-0 podman[203687]: time="2025-12-09T11:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:05:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:05:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec 09 11:06:00 compute-0 sudo[248409]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxiohpdpvmezzfkbybztssccmzzqfpjh ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765278359.470956-60811-275166869580579/AnsiballZ_command.py'
Dec 09 11:06:00 compute-0 sudo[248409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:06:00 compute-0 python3[248411]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 09 11:06:00 compute-0 sudo[248409]: pam_unix(sudo:session): session closed for user root
Dec 09 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:06:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:06:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:06:01 compute-0 podman[248450]: 2025-12-09 11:06:01.964174177 +0000 UTC m=+0.112723381 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, release=1214.1726694543, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 09 11:06:01 compute-0 podman[248451]: 2025-12-09 11:06:01.96427078 +0000 UTC m=+0.106214252 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 09 11:06:02 compute-0 nova_compute[189493]: 2025-12-09 11:06:02.600 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:03 compute-0 nova_compute[189493]: 2025-12-09 11:06:03.198 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:03 compute-0 podman[248488]: 2025-12-09 11:06:03.925523384 +0000 UTC m=+0.079976557 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec 09 11:06:03 compute-0 podman[248487]: 2025-12-09 11:06:03.942696172 +0000 UTC m=+0.101949251 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 09 11:06:06 compute-0 sshd-session[248528]: Invalid user docker from 159.223.8.217 port 50316
Dec 09 11:06:06 compute-0 sshd-session[248528]: Connection closed by invalid user docker 159.223.8.217 port 50316 [preauth]
Dec 09 11:06:07 compute-0 nova_compute[189493]: 2025-12-09 11:06:07.603 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:08 compute-0 nova_compute[189493]: 2025-12-09 11:06:08.202 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:12 compute-0 podman[248530]: 2025-12-09 11:06:12.018537065 +0000 UTC m=+0.158967878 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 09 11:06:12 compute-0 nova_compute[189493]: 2025-12-09 11:06:12.607 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:13 compute-0 nova_compute[189493]: 2025-12-09 11:06:13.205 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:13 compute-0 podman[248551]: 2025-12-09 11:06:13.99597039 +0000 UTC m=+0.138964686 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:06:14 compute-0 podman[248552]: 2025-12-09 11:06:14.02742438 +0000 UTC m=+0.165136059 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 09 11:06:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:06:17.001 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:06:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:06:17.003 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:06:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:06:17.004 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:06:17 compute-0 nova_compute[189493]: 2025-12-09 11:06:17.610 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:18 compute-0 nova_compute[189493]: 2025-12-09 11:06:18.209 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:22 compute-0 nova_compute[189493]: 2025-12-09 11:06:22.613 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:23 compute-0 nova_compute[189493]: 2025-12-09 11:06:23.213 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.297 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, one can expect the processing to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.299 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.314 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.321 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.321 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.322 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:06:23.322409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.332 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.340 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.343 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:06:23.343192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.391 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.391 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.392 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.429 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.429 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.431 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.431 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.431 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:06:23.431113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.433 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:06:23.432812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:06:23.434844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.516 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.517 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.517 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.630 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.631 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.631 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.633 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.633 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:06:23.632922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.635 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:06:23.634837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.635 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.635 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:06:23.637664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.638 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.638 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.638 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.639 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.639 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.641 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.641 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.641 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.641 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.642 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:06:23.640473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:06:23.643852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.675 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 39850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.707 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 48790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.709 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.711 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:06:23.709657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.711 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.712 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.712 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.713 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:06:23.714451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.715 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.715 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.716 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.716 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.717 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
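Note on the disk.device.write.bytes samples above: one sample is emitted per block device of each instance, which is why each instance UUID appears three times with different volumes. A minimal sketch, assuming python-libvirt access to the local qemu:///system hypervisor (the instance UUIDs are taken from the log, the device discovery via the domain XML is illustrative and not the actual ceilometer pollster code):

    # Minimal sketch: read per-device write-byte counters from libvirt.
    # Assumes python-libvirt on the compute node; UUIDs come from the log
    # above, device names are discovered from the domain XML.
    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.openReadOnly('qemu:///system')
    for uuid in ('7b43ca09-ed65-4465-9fcc-95caa6dc9a88',
                 '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f'):
        dom = conn.lookupByUUIDString(uuid)
        xml = ElementTree.fromstring(dom.XMLDesc())
        for target in xml.findall('.//devices/disk/target'):
            dev = target.get('dev')  # e.g. 'vda'
            rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
            print(f'{uuid}/{dev} disk.device.write.bytes volume: {wr_bytes}')
    conn.close()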
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.718 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2223058984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:06:23.719303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.720 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.720 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.721 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.721 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.721 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.724 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.724 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:06:23.723874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
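The power.state samples above report volume 1 for both instances, which corresponds to the "running" value in Nova's power-state enumeration. A sketch, assuming python-libvirt, of mapping a libvirt domain state to a small integer like the one logged here; the mapping table is an illustrative assumption, not copied from ceilometer or nova:

    # Illustrative mapping from a libvirt domain state to the integer
    # power.state value seen above (1 == running). Mapping values are an
    # assumption for illustration only.
    import libvirt

    LIBVIRT_TO_POWER_STATE = {
        libvirt.VIR_DOMAIN_RUNNING: 1,   # running
        libvirt.VIR_DOMAIN_PAUSED: 3,    # paused
        libvirt.VIR_DOMAIN_SHUTOFF: 4,   # shut down
        libvirt.VIR_DOMAIN_CRASHED: 6,   # crashed
    }

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('7b43ca09-ed65-4465-9fcc-95caa6dc9a88')
    state, _reason = dom.state()
    print('power.state volume:', LIBVIRT_TO_POWER_STATE.get(state, 0))
    conn.close()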
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.726 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:06:23.726617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.727 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.727 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
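The network.outgoing.bytes volumes above (2356 and 2384) are cumulative transmit-byte counters for each instance's virtual interface. A minimal sketch, assuming python-libvirt and that the tap device name is read from the domain XML (illustrative, not the pollster's actual code path):

    # Minimal sketch: cumulative interface counters from libvirt; the
    # tx_bytes field corresponds to network.outgoing.bytes above.
    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f')
    xml = ElementTree.fromstring(dom.XMLDesc())
    for target in xml.findall('.//devices/interface/target'):
        dev = target.get('dev')  # e.g. a tap device name
        (rx_bytes, rx_packets, rx_errs, rx_drop,
         tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats(dev)
        print(f'network.outgoing.bytes volume: {tx_bytes}')
    conn.close()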
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.728 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.729 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.730 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:06:23.729484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.730 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.730 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.731 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.731 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.732 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.733 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.734 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.734 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:06:23.734299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.735 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
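Unlike the cumulative network.incoming.bytes meter, the ".delta" variant above reports the change since the previous polling cycle, so a volume of 0 simply means no bytes were received between cycles (or that there is no earlier value to diff against yet). A tiny sketch of that idea; the cache dict and counter source are illustrative assumptions, not ceilometer internals:

    # Sketch of how a ".delta" meter differs from its cumulative
    # counterpart: the sample is the difference against the value cached
    # from the previous cycle.
    previous = {}

    def incoming_bytes_delta(instance_uuid, rx_bytes_now):
        delta = rx_bytes_now - previous.get(instance_uuid, rx_bytes_now)
        previous[instance_uuid] = rx_bytes_now
        return delta

    print(incoming_bytes_delta('7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 10240))  # 0 on first cycle
    print(incoming_bytes_delta('7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 12288))  # 2048 afterwards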
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.738 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.739 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:06:23.737022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.740 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:06:23.740168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.741 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.742 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.742 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.742 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.742 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:06:23.742379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.746 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.746 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:06:23.744247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:06:23.746047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
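The memory.usage volumes above (48.953125 and 48.796875) are in MiB; the fractional values come from KiB-granularity counters exposed by libvirt. A sketch of deriving such a figure from memoryStats(); the available-minus-unused arithmetic is an assumption for illustration, and which counters are present depends on the guest's balloon driver:

    # Sketch: derive a memory-usage figure in MiB from libvirt memoryStats().
    # The arithmetic below is illustrative, not ceilometer's exact formula.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('7b43ca09-ed65-4465-9fcc-95caa6dc9a88')
    stats = dom.memoryStats()  # values are reported in KiB
    if 'available' in stats and 'unused' in stats:
        used_mib = (stats['available'] - stats['unused']) / 1024.0
    else:
        used_mib = stats.get('rss', 0) / 1024.0  # fallback if balloon stats absent
    print('memory.usage volume:', used_mib)  # e.g. 48.953125
    conn.close()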
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.750 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.750 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:06:23.747422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:06:23.749470) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
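The run of "Finished processing pollster [...]" lines above closes one complete polling cycle: every meter in the task list has been run against the two local instances. A small sketch, assuming the journal text is saved to a file (the filename is hypothetical), that pairs the "Polling pollster" and "Finished polling pollster" INFO lines to estimate how long each meter took in a cycle:

    # Sketch: pair "Polling pollster X" / "Finished polling pollster X"
    # INFO lines to estimate per-meter polling duration. The input file
    # name is hypothetical; the regex matches the log format shown above.
    import re
    from datetime import datetime

    PAT = re.compile(
        r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO '
        r'ceilometer\.polling\.manager \[-\] '
        r'(Polling|Finished polling) pollster (\S+)')

    started = {}
    with open('ceilometer_agent_compute.log') as fh:  # hypothetical path
        for line in fh:
            m = PAT.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S.%f')
            if m.group(2) == 'Polling':
                started[m.group(3)] = ts
            elif m.group(3) in started:
                delta = ts - started.pop(m.group(3))
                print(f'{m.group(3)}: {delta.total_seconds() * 1000:.1f} ms')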
Dec 09 11:06:23 compute-0 podman[248601]: 2025-12-09 11:06:23.994100917 +0000 UTC m=+0.139951032 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 09 11:06:27 compute-0 nova_compute[189493]: 2025-12-09 11:06:27.615 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:27 compute-0 podman[248620]: 2025-12-09 11:06:27.977170495 +0000 UTC m=+0.120846733 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 11:06:28 compute-0 nova_compute[189493]: 2025-12-09 11:06:28.216 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:28 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 09 11:06:29 compute-0 podman[203687]: time="2025-12-09T11:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:06:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:06:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec 09 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
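The openstack_network_exporter errors above mean the exporter cannot find the ovsdb-server and ovn-northd control sockets on this node (ovn-northd does not run on a compute host), so those collectors are skipped each scrape. A sketch of the kind of existence check behind the "no control socket files found" message; the socket directories follow the volume mounts shown further below in the exporter's container configuration and may differ per deployment:

    # Sketch: the "no control socket files found" errors boil down to a
    # glob like this coming up empty for the given daemon.
    import glob

    SOCKET_DIRS = {'ovsdb-server': '/run/openvswitch',
                   'ovs-vswitchd': '/run/openvswitch',
                   'ovn-northd': '/run/ovn'}  # assumed conventional paths
    for daemon, directory in SOCKET_DIRS.items():
        sockets = glob.glob(f'{directory}/{daemon}.*.ctl')
        print(daemon, '->', sockets or 'no control socket files found')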
Dec 09 11:06:32 compute-0 nova_compute[189493]: 2025-12-09 11:06:32.618 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:32 compute-0 podman[248646]: 2025-12-09 11:06:32.972385218 +0000 UTC m=+0.111861480 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec 09 11:06:32 compute-0 podman[248645]: 2025-12-09 11:06:32.979883474 +0000 UTC m=+0.122939799 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler, distribution-scope=public, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_id=edpm, io.openshift.tags=base rhel9, version=9.4, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 09 11:06:33 compute-0 nova_compute[189493]: 2025-12-09 11:06:33.219 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:33 compute-0 sshd-session[248682]: Invalid user docker from 159.223.8.217 port 50546
Dec 09 11:06:33 compute-0 sshd-session[248682]: Connection closed by invalid user docker 159.223.8.217 port 50546 [preauth]
Dec 09 11:06:34 compute-0 podman[248684]: 2025-12-09 11:06:34.981550891 +0000 UTC m=+0.117032034 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 11:06:34 compute-0 podman[248685]: 2025-12-09 11:06:34.993896724 +0000 UTC m=+0.125211408 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 09 11:06:36 compute-0 nova_compute[189493]: 2025-12-09 11:06:36.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:06:37 compute-0 nova_compute[189493]: 2025-12-09 11:06:37.621 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:37 compute-0 nova_compute[189493]: 2025-12-09 11:06:37.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:06:37 compute-0 nova_compute[189493]: 2025-12-09 11:06:37.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:06:38 compute-0 nova_compute[189493]: 2025-12-09 11:06:38.221 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:38 compute-0 nova_compute[189493]: 2025-12-09 11:06:38.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:06:41 compute-0 nova_compute[189493]: 2025-12-09 11:06:41.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:06:42 compute-0 nova_compute[189493]: 2025-12-09 11:06:42.623 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:42 compute-0 podman[248720]: 2025-12-09 11:06:42.961165325 +0000 UTC m=+0.109651431 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 11:06:43 compute-0 nova_compute[189493]: 2025-12-09 11:06:43.223 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:44 compute-0 podman[248741]: 2025-12-09 11:06:44.804951483 +0000 UTC m=+0.086013225 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 11:06:44 compute-0 nova_compute[189493]: 2025-12-09 11:06:44.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:06:44 compute-0 nova_compute[189493]: 2025-12-09 11:06:44.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:06:44 compute-0 podman[248742]: 2025-12-09 11:06:44.843190761 +0000 UTC m=+0.123963436 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 11:06:45 compute-0 nova_compute[189493]: 2025-12-09 11:06:45.027 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:06:45 compute-0 nova_compute[189493]: 2025-12-09 11:06:45.028 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:06:45 compute-0 nova_compute[189493]: 2025-12-09 11:06:45.028 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.343 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.366 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.366 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.367 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.367 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.395 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.395 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.396 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.396 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.517 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.594 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.595 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.655 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.655 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.717 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.718 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.774 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.781 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.838 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.839 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.922 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.924 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.983 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.984 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.043 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.416 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.417 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4853MB free_disk=72.13289260864258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.417 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.417 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.505 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.506 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.506 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.506 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.598 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.617 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.619 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.620 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.626 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:48 compute-0 nova_compute[189493]: 2025-12-09 11:06:48.226 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:52 compute-0 nova_compute[189493]: 2025-12-09 11:06:52.629 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:53 compute-0 nova_compute[189493]: 2025-12-09 11:06:53.094 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:06:53 compute-0 nova_compute[189493]: 2025-12-09 11:06:53.095 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:06:53 compute-0 nova_compute[189493]: 2025-12-09 11:06:53.228 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:54 compute-0 podman[248816]: 2025-12-09 11:06:54.938435045 +0000 UTC m=+0.086754084 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251202, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 11:06:57 compute-0 nova_compute[189493]: 2025-12-09 11:06:57.632 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:58 compute-0 nova_compute[189493]: 2025-12-09 11:06:58.231 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:06:58 compute-0 podman[248837]: 2025-12-09 11:06:58.95737613 +0000 UTC m=+0.103773168 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 11:06:59 compute-0 podman[203687]: time="2025-12-09T11:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:06:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:06:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec 09 11:06:59 compute-0 sshd-session[248860]: Invalid user docker from 159.223.8.217 port 57302
Dec 09 11:07:00 compute-0 sshd-session[248860]: Connection closed by invalid user docker 159.223.8.217 port 57302 [preauth]
Dec 09 11:07:00 compute-0 sshd-session[247364]: Received disconnect from 38.102.83.145 port 59018:11: disconnected by user
Dec 09 11:07:00 compute-0 sshd-session[247364]: Disconnected from user zuul 38.102.83.145 port 59018
Dec 09 11:07:00 compute-0 sshd-session[247361]: pam_unix(sshd:session): session closed for user zuul
Dec 09 11:07:00 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec 09 11:07:00 compute-0 systemd[1]: session-31.scope: Consumed 4.766s CPU time.
Dec 09 11:07:00 compute-0 systemd-logind[806]: Session 31 logged out. Waiting for processes to exit.
Dec 09 11:07:00 compute-0 systemd-logind[806]: Removed session 31.
Dec 09 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:07:02 compute-0 nova_compute[189493]: 2025-12-09 11:07:02.635 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:03 compute-0 nova_compute[189493]: 2025-12-09 11:07:03.234 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:03 compute-0 podman[248862]: 2025-12-09 11:07:03.967723707 +0000 UTC m=+0.113285196 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, managed_by=edpm_ansible, version=9.4)
Dec 09 11:07:03 compute-0 podman[248863]: 2025-12-09 11:07:03.985030059 +0000 UTC m=+0.115793232 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec 09 11:07:05 compute-0 podman[248901]: 2025-12-09 11:07:05.916520107 +0000 UTC m=+0.065610673 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec 09 11:07:05 compute-0 podman[248900]: 2025-12-09 11:07:05.934333581 +0000 UTC m=+0.076498586 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 09 11:07:07 compute-0 nova_compute[189493]: 2025-12-09 11:07:07.640 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:08 compute-0 nova_compute[189493]: 2025-12-09 11:07:08.237 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:12 compute-0 nova_compute[189493]: 2025-12-09 11:07:12.643 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:13 compute-0 nova_compute[189493]: 2025-12-09 11:07:13.240 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:13 compute-0 podman[248938]: 2025-12-09 11:07:13.971941033 +0000 UTC m=+0.104968400 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 09 11:07:14 compute-0 podman[248958]: 2025-12-09 11:07:14.934627366 +0000 UTC m=+0.081061605 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 11:07:15 compute-0 podman[248981]: 2025-12-09 11:07:15.149904863 +0000 UTC m=+0.181858866 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 09 11:07:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:07:17.003 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:07:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:07:17.004 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:07:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:07:17.005 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:07:17 compute-0 nova_compute[189493]: 2025-12-09 11:07:17.647 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:18 compute-0 nova_compute[189493]: 2025-12-09 11:07:18.243 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:22 compute-0 nova_compute[189493]: 2025-12-09 11:07:22.649 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:23 compute-0 nova_compute[189493]: 2025-12-09 11:07:23.246 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:25 compute-0 podman[249007]: 2025-12-09 11:07:25.944333737 +0000 UTC m=+0.089851465 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:07:27 compute-0 nova_compute[189493]: 2025-12-09 11:07:27.651 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:27 compute-0 sshd-session[249025]: Invalid user docker from 159.223.8.217 port 38916
Dec 09 11:07:27 compute-0 sshd-session[249025]: Connection closed by invalid user docker 159.223.8.217 port 38916 [preauth]
Dec 09 11:07:28 compute-0 nova_compute[189493]: 2025-12-09 11:07:28.249 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:29 compute-0 podman[203687]: time="2025-12-09T11:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:07:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:07:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec 09 11:07:29 compute-0 podman[249027]: 2025-12-09 11:07:29.931572043 +0000 UTC m=+0.086391135 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:07:32 compute-0 nova_compute[189493]: 2025-12-09 11:07:32.654 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:33 compute-0 nova_compute[189493]: 2025-12-09 11:07:33.251 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:34 compute-0 podman[249051]: 2025-12-09 11:07:34.96193125 +0000 UTC m=+0.101867828 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 11:07:34 compute-0 podman[249050]: 2025-12-09 11:07:34.971540811 +0000 UTC m=+0.118621816 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec 09 11:07:36 compute-0 podman[249089]: 2025-12-09 11:07:36.94523741 +0000 UTC m=+0.089343872 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 09 11:07:36 compute-0 podman[249090]: 2025-12-09 11:07:36.968686342 +0000 UTC m=+0.107930257 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 09 11:07:37 compute-0 nova_compute[189493]: 2025-12-09 11:07:37.657 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:37 compute-0 nova_compute[189493]: 2025-12-09 11:07:37.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:38 compute-0 nova_compute[189493]: 2025-12-09 11:07:38.254 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:38 compute-0 nova_compute[189493]: 2025-12-09 11:07:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:38 compute-0 nova_compute[189493]: 2025-12-09 11:07:38.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:39 compute-0 nova_compute[189493]: 2025-12-09 11:07:39.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:39 compute-0 nova_compute[189493]: 2025-12-09 11:07:39.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:42 compute-0 nova_compute[189493]: 2025-12-09 11:07:42.660 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:43 compute-0 nova_compute[189493]: 2025-12-09 11:07:43.257 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:43 compute-0 nova_compute[189493]: 2025-12-09 11:07:43.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:44 compute-0 podman[249126]: 2025-12-09 11:07:44.805644447 +0000 UTC m=+0.117418964 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 09 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.880 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.880 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.881 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.881 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:07:45 compute-0 podman[249147]: 2025-12-09 11:07:45.931085178 +0000 UTC m=+0.085432480 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.974 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:07:45 compute-0 podman[249148]: 2025-12-09 11:07:45.978653499 +0000 UTC m=+0.134764047 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.036 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.038 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.110 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.112 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.190 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.191 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.254 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.265 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.326 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.327 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.403 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.408 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.508 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.510 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.581 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.976 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.978 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4831MB free_disk=72.13315963745117GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.978 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.978 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.078 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.079 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 09 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.079 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.079 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.140 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.166 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.168 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.168 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.663 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:48 compute-0 nova_compute[189493]: 2025-12-09 11:07:48.169 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:48 compute-0 nova_compute[189493]: 2025-12-09 11:07:48.170 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:07:48 compute-0 nova_compute[189493]: 2025-12-09 11:07:48.170 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:07:48 compute-0 nova_compute[189493]: 2025-12-09 11:07:48.259 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:49 compute-0 nova_compute[189493]: 2025-12-09 11:07:49.320 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:07:49 compute-0 nova_compute[189493]: 2025-12-09 11:07:49.321 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:07:49 compute-0 nova_compute[189493]: 2025-12-09 11:07:49.321 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 09 11:07:49 compute-0 nova_compute[189493]: 2025-12-09 11:07:49.322 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:07:51 compute-0 nova_compute[189493]: 2025-12-09 11:07:51.395 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:07:51 compute-0 nova_compute[189493]: 2025-12-09 11:07:51.423 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:07:51 compute-0 nova_compute[189493]: 2025-12-09 11:07:51.424 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 09 11:07:52 compute-0 nova_compute[189493]: 2025-12-09 11:07:52.665 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:52 compute-0 nova_compute[189493]: 2025-12-09 11:07:52.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:07:52 compute-0 nova_compute[189493]: 2025-12-09 11:07:52.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:07:53 compute-0 nova_compute[189493]: 2025-12-09 11:07:53.261 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:55 compute-0 sshd-session[249220]: Invalid user docker from 159.223.8.217 port 53596
Dec 09 11:07:56 compute-0 sshd-session[249220]: Connection closed by invalid user docker 159.223.8.217 port 53596 [preauth]
Dec 09 11:07:57 compute-0 podman[249222]: 2025-12-09 11:07:57.005008891 +0000 UTC m=+0.137259761 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec 09 11:07:57 compute-0 nova_compute[189493]: 2025-12-09 11:07:57.668 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:58 compute-0 nova_compute[189493]: 2025-12-09 11:07:58.264 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:07:59 compute-0 podman[203687]: time="2025-12-09T11:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:07:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec 09 11:07:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec 09 11:08:00 compute-0 podman[249240]: 2025-12-09 11:08:00.944673507 +0000 UTC m=+0.089326481 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:08:02 compute-0 nova_compute[189493]: 2025-12-09 11:08:02.672 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:03 compute-0 nova_compute[189493]: 2025-12-09 11:08:03.267 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:04 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.996 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:04 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.997 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:04 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.998 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:04 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.999 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.999 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.001 189497 INFO nova.compute.manager [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Terminating instance
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.003 189497 DEBUG nova.compute.manager [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 09 11:08:05 compute-0 kernel: tapb903bb84-e1 (unregistering): left promiscuous mode
Dec 09 11:08:05 compute-0 NetworkManager[56302]: <info>  [1765278485.0621] device (tapb903bb84-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.072 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 ovn_controller[97780]: 2025-12-09T11:08:05Z|00058|binding|INFO|Releasing lport b903bb84-e176-4730-b223-613a9b01712b from this chassis (sb_readonly=0)
Dec 09 11:08:05 compute-0 ovn_controller[97780]: 2025-12-09T11:08:05Z|00059|binding|INFO|Setting lport b903bb84-e176-4730-b223-613a9b01712b down in Southbound
Dec 09 11:08:05 compute-0 ovn_controller[97780]: 2025-12-09T11:08:05Z|00060|binding|INFO|Removing iface tapb903bb84-e1 ovn-installed in OVS
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.082 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.086 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:d3:f4 192.168.0.92'], port_security=['fa:16:3e:91:d3:f4 192.168.0.92'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-port-rb2sbixhbgrm', 'neutron:cidrs': '192.168.0.92/24', 'neutron:device_id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-port-rb2sbixhbgrm', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=b903bb84-e176-4730-b223-613a9b01712b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.087 106644 INFO neutron.agent.ovn.metadata.agent [-] Port b903bb84-e176-4730-b223-613a9b01712b in datapath c5af7354-5afe-400a-9e13-5500648117d8 unbound from our chassis
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.089 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.099 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec 09 11:08:05 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 5.435s CPU time.
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.108 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[7dff2a87-d27c-4aaa-b8fd-c93e902cefa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:05 compute-0 systemd-machined[155790]: Machine qemu-4-instance-00000004 terminated.
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.147 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[c9db50f6-bada-4d68-a138-d6ae2d713231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.150 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[6c09d34e-7031-4a2e-90e6-df0515ef1ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:05 compute-0 podman[249266]: 2025-12-09 11:08:05.171633195 +0000 UTC m=+0.084397603 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.178 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[f93807fd-b508-4ef8-9a2a-300fc3a59ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.198 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd3d124-aaa8-4623-ab9c-e3845e2126f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 22416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249315, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:05 compute-0 podman[249267]: 2025-12-09 11:08:05.211451393 +0000 UTC m=+0.108273545 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.214 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[d904e5e7-b1e6-4344-b229-5e8c1a116758]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249316, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249316, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.215 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.217 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.224 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.225 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.225 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.226 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.226 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.232 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.238 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.309 189497 INFO nova.virt.libvirt.driver [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance destroyed successfully.
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.309 189497 DEBUG nova.objects.instance [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.336 189497 DEBUG nova.virt.libvirt.vif [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-09T10:57:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',id=4,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-09T10:57:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-d2fjtx7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-09T10:57:39Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTA0MDE3NDY2MzAyNTc2ODM2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Dec 09 11:08:05 compute-0 nova_compute[189493]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTA0MDE3NDY2MzAyNTc2ODM2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=7b43ca09-ed65-4465-9fcc-95caa6dc9a88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.337 189497 DEBUG nova.network.os_vif_util [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.338 189497 DEBUG nova.network.os_vif_util [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.339 189497 DEBUG os_vif [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.343 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.344 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb903bb84-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.347 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.348 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.349 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.357 189497 INFO os_vif [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1')
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.358 189497 INFO nova.virt.libvirt.driver [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Deleting instance files /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88_del
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.360 189497 INFO nova.virt.libvirt.driver [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Deletion of /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88_del complete
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.420 189497 INFO nova.compute.manager [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Took 0.42 seconds to destroy the instance on the hypervisor.
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.421 189497 DEBUG oslo.service.loopingcall [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.422 189497 DEBUG nova.compute.manager [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 09 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.423 189497 DEBUG nova.network.neutron [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 09 11:08:05 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 11:08:05.336 189497 DEBUG nova.virt.libvirt.vif [None req-ab479ce5-31 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.159 189497 DEBUG nova.compute.manager [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-unplugged-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.159 189497 DEBUG oslo_concurrency.lockutils [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.160 189497 DEBUG oslo_concurrency.lockutils [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.160 189497 DEBUG oslo_concurrency.lockutils [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.160 189497 DEBUG nova.compute.manager [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] No waiting events found dispatching network-vif-unplugged-b903bb84-e176-4730-b223-613a9b01712b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.161 189497 DEBUG nova.compute.manager [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-unplugged-b903bb84-e176-4730-b223-613a9b01712b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 09 11:08:07 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:07.371 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.372 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:07 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:07.374 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.485 189497 DEBUG nova.compute.manager [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-changed-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.486 189497 DEBUG nova.compute.manager [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Refreshing instance network info cache due to event network-changed-b903bb84-e176-4730-b223-613a9b01712b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.486 189497 DEBUG oslo_concurrency.lockutils [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.488 189497 DEBUG oslo_concurrency.lockutils [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.489 189497 DEBUG nova.network.neutron [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Refreshing network info cache for port b903bb84-e176-4730-b223-613a9b01712b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 09 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.677 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:07 compute-0 podman[249338]: 2025-12-09 11:08:07.981577228 +0000 UTC m=+0.121638464 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 09 11:08:07 compute-0 podman[249339]: 2025-12-09 11:08:07.981638459 +0000 UTC m=+0.119410865 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec 09 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.280 189497 DEBUG nova.compute.manager [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.281 189497 DEBUG oslo_concurrency.lockutils [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.281 189497 DEBUG oslo_concurrency.lockutils [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.282 189497 DEBUG oslo_concurrency.lockutils [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.282 189497 DEBUG nova.compute.manager [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] No waiting events found dispatching network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.282 189497 WARNING nova.compute.manager [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received unexpected event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b for instance with vm_state active and task_state deleting.
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.348 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.565 189497 DEBUG nova.network.neutron [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.590 189497 INFO nova.compute.manager [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Took 5.17 seconds to deallocate network for instance.
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.628 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.629 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.736 189497 DEBUG nova.compute.provider_tree [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.754 189497 DEBUG nova.scheduler.client.report [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.781 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.808 189497 INFO nova.scheduler.client.report [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88
Dec 09 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.883 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:11 compute-0 nova_compute[189493]: 2025-12-09 11:08:11.052 189497 DEBUG nova.network.neutron [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated VIF entry in instance network info cache for port b903bb84-e176-4730-b223-613a9b01712b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 09 11:08:11 compute-0 nova_compute[189493]: 2025-12-09 11:08:11.054 189497 DEBUG nova.network.neutron [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:08:11 compute-0 nova_compute[189493]: 2025-12-09 11:08:11.076 189497 DEBUG oslo_concurrency.lockutils [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 09 11:08:12 compute-0 nova_compute[189493]: 2025-12-09 11:08:12.681 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:13 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:13.377 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:08:15 compute-0 nova_compute[189493]: 2025-12-09 11:08:15.352 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:15 compute-0 podman[249373]: 2025-12-09 11:08:15.967716935 +0000 UTC m=+0.129519480 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 09 11:08:16 compute-0 podman[249393]: 2025-12-09 11:08:16.132115024 +0000 UTC m=+0.103679036 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 11:08:16 compute-0 podman[249394]: 2025-12-09 11:08:16.213958829 +0000 UTC m=+0.178438766 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 11:08:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:17.005 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:17.006 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:17.006 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:17 compute-0 nova_compute[189493]: 2025-12-09 11:08:17.685 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:20 compute-0 nova_compute[189493]: 2025-12-09 11:08:20.304 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278485.302588, 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 11:08:20 compute-0 nova_compute[189493]: 2025-12-09 11:08:20.305 189497 INFO nova.compute.manager [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] VM Stopped (Lifecycle Event)
Dec 09 11:08:20 compute-0 nova_compute[189493]: 2025-12-09 11:08:20.347 189497 DEBUG nova.compute.manager [None req-98b04bb9-8a62-4342-9f22-5deab8d0b28a - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 11:08:20 compute-0 nova_compute[189493]: 2025-12-09 11:08:20.355 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:22 compute-0 nova_compute[189493]: 2025-12-09 11:08:22.688 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.298 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.299 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.314 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.314 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.315 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:08:23.315145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.321 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:08:23.322494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.348 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:08:23.350095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:08:23.351127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.352 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:08:23.352639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.421 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.422 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.422 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.423 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.424 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.425 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:08:23.424453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.426 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.427 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:08:23.426798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.427 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.428 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.428 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.429 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:08:23.429588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.431 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:08:23.432441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.434 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:08:23.435321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.461 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 50530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.462 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.463 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.463 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.464 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:08:23.463554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.464 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.464 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.466 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.467 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:08:23.466665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.467 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.468 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.469 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.470 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:08:23.469572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.470 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.470 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.471 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.472 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.472 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:08:23.472370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.473 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.474 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.474 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.475 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:08:23.474617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.475 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.476 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.476 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.476 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.477 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:08:23.476955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.477 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.478 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.478 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.478 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.479 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.480 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.480 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:08:23.480011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.482 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:08:23.482306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.483 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.483 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.484 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.484 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:08:23.484715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.486 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.487 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:08:23.487079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.487 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.490 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:08:23.489541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.491 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.491 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.492 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:08:23.491731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.493 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.494 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.494 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:08:23.493938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.495 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.495 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.496 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.496 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:08:23.496194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.497 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.498 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.498 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.498 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.498 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:08:24 compute-0 sshd-session[249444]: Invalid user docker from 159.223.8.217 port 45742
Dec 09 11:08:24 compute-0 sshd-session[249444]: Connection closed by invalid user docker 159.223.8.217 port 45742 [preauth]
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.598 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.600 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.601 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.601 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.602 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.604 189497 INFO nova.compute.manager [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Terminating instance
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.607 189497 DEBUG nova.compute.manager [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 09 11:08:24 compute-0 kernel: tap2c684388-b6 (unregistering): left promiscuous mode
Dec 09 11:08:24 compute-0 NetworkManager[56302]: <info>  [1765278504.6700] device (tap2c684388-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.677 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:24 compute-0 ovn_controller[97780]: 2025-12-09T11:08:24Z|00061|binding|INFO|Releasing lport 2c684388-b6d9-4de0-8691-29807fabed2c from this chassis (sb_readonly=0)
Dec 09 11:08:24 compute-0 ovn_controller[97780]: 2025-12-09T11:08:24Z|00062|binding|INFO|Setting lport 2c684388-b6d9-4de0-8691-29807fabed2c down in Southbound
Dec 09 11:08:24 compute-0 ovn_controller[97780]: 2025-12-09T11:08:24Z|00063|binding|INFO|Removing iface tap2c684388-b6 ovn-installed in OVS
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.682 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.686 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:65:39 192.168.0.250'], port_security=['fa:16:3e:c7:65:39 192.168.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.250/24', 'neutron:device_id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=2c684388-b6d9-4de0-8691-29807fabed2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.689 106644 INFO neutron.agent.ovn.metadata.agent [-] Port 2c684388-b6d9-4de0-8691-29807fabed2c in datapath c5af7354-5afe-400a-9e13-5500648117d8 unbound from our chassis
Dec 09 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.691 106644 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c5af7354-5afe-400a-9e13-5500648117d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 09 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.693 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1da532-7239-4e05-ad57-bb2ddea6fffa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.695 106644 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 namespace which is not needed anymore
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.712 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 09 11:08:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 27.253s CPU time.
Dec 09 11:08:24 compute-0 systemd-machined[155790]: Machine qemu-1-instance-00000001 terminated.
Dec 09 11:08:24 compute-0 kernel: tap2c684388-b6: entered promiscuous mode
Dec 09 11:08:24 compute-0 kernel: tap2c684388-b6 (unregistering): left promiscuous mode
Dec 09 11:08:24 compute-0 NetworkManager[56302]: <info>  [1765278504.8402] manager: (tap2c684388-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.849 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.905 189497 DEBUG nova.compute.manager [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-unplugged-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.906 189497 DEBUG oslo_concurrency.lockutils [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.906 189497 DEBUG oslo_concurrency.lockutils [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.906 189497 DEBUG oslo_concurrency.lockutils [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.906 189497 DEBUG nova.compute.manager [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] No waiting events found dispatching network-vif-unplugged-2c684388-b6d9-4de0-8691-29807fabed2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.907 189497 DEBUG nova.compute.manager [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-unplugged-2c684388-b6d9-4de0-8691-29807fabed2c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.908 189497 INFO nova.virt.libvirt.driver [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance destroyed successfully.
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.908 189497 DEBUG nova.objects.instance [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.929 189497 DEBUG nova.virt.libvirt.vif [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-09T10:48:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-09T10:48:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-o83aar8e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-09T10:48:53Z,user_data=None,user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.929 189497 DEBUG nova.network.os_vif_util [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.930 189497 DEBUG nova.network.os_vif_util [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.930 189497 DEBUG os_vif [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.931 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.932 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c684388-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.934 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.935 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 09 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [NOTICE]   (240052) : haproxy version is 2.8.14-c23fe91
Dec 09 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [NOTICE]   (240052) : path to executable is /usr/sbin/haproxy
Dec 09 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [WARNING]  (240052) : Exiting Master process...
Dec 09 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [WARNING]  (240052) : Exiting Master process...
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.939 189497 INFO os_vif [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6')
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.939 189497 INFO nova.virt.libvirt.driver [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Deleting instance files /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f_del
Dec 09 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.940 189497 INFO nova.virt.libvirt.driver [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Deletion of /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f_del complete
Dec 09 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [ALERT]    (240052) : Current worker (240054) exited with code 143 (Terminated)
Dec 09 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [WARNING]  (240052) : All workers exited. Exiting... (0)
Dec 09 11:08:24 compute-0 systemd[1]: libpod-c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157.scope: Deactivated successfully.
Dec 09 11:08:24 compute-0 podman[249476]: 2025-12-09 11:08:24.950044327 +0000 UTC m=+0.079413782 container died c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157-userdata-shm.mount: Deactivated successfully.
Dec 09 11:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b24b7ae1cc8b90219deedb86d3b48361a8607a5826e7fa3b48e4b1d97a56504-merged.mount: Deactivated successfully.
Dec 09 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.012 189497 INFO nova.compute.manager [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Took 0.40 seconds to destroy the instance on the hypervisor.
Dec 09 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.013 189497 DEBUG oslo.service.loopingcall [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 09 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.013 189497 DEBUG nova.compute.manager [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 09 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.013 189497 DEBUG nova.network.neutron [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 09 11:08:25 compute-0 podman[249476]: 2025-12-09 11:08:25.02030671 +0000 UTC m=+0.149676165 container cleanup c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 09 11:08:25 compute-0 systemd[1]: libpod-conmon-c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157.scope: Deactivated successfully.
Dec 09 11:08:25 compute-0 podman[249516]: 2025-12-09 11:08:25.11419229 +0000 UTC m=+0.061756752 container remove c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.127 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[ed28aa47-4a02-47df-92a6-e94a6111ce73]: (4, ('Tue Dec  9 11:08:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 (c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157)\nc6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157\nTue Dec  9 11:08:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 (c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157)\nc6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.130 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[c0069371-1547-4ab3-ba53-80fd97020efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.131 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.133 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:25 compute-0 kernel: tapc5af7354-50: left promiscuous mode
Dec 09 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.137 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.140 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[ba89abbe-7a4a-45bd-a0ef-7def6eda965d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.154 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.165 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[91d13395-43e4-4c0c-9a18-b696bc7c075c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.167 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[fe621261-ee73-40f6-a2fd-ce5816e0e618]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.189 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[ba133564-7547-4ba7-b2e4-939e38cf9e85]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396015, 'reachable_time': 31644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249531, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dc5af7354\x2d5afe\x2d400a\x2d9e13\x2d5500648117d8.mount: Deactivated successfully.
Dec 09 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.208 106757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 09 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.210 106757 DEBUG oslo.privsep.daemon [-] privsep: reply[7af3d7c3-b58f-4516-a999-e92749772f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.058 189497 DEBUG nova.network.neutron [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.081 189497 INFO nova.compute.manager [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Took 1.07 seconds to deallocate network for instance.
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.129 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.129 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.212 189497 DEBUG nova.compute.provider_tree [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.232 189497 DEBUG nova.scheduler.client.report [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.258 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.284 189497 INFO nova.scheduler.client.report [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.352 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.995 189497 DEBUG nova.compute.manager [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.996 189497 DEBUG oslo_concurrency.lockutils [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.996 189497 DEBUG oslo_concurrency.lockutils [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.997 189497 DEBUG oslo_concurrency.lockutils [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.997 189497 DEBUG nova.compute.manager [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] No waiting events found dispatching network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.998 189497 WARNING nova.compute.manager [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received unexpected event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c for instance with vm_state deleted and task_state None.
Dec 09 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.998 189497 DEBUG nova.compute.manager [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-deleted-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 09 11:08:27 compute-0 nova_compute[189493]: 2025-12-09 11:08:27.692 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:27 compute-0 podman[249533]: 2025-12-09 11:08:27.980004963 +0000 UTC m=+0.123721808 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251202)
Dec 09 11:08:29 compute-0 podman[203687]: time="2025-12-09T11:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:08:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:08:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
Dec 09 11:08:29 compute-0 nova_compute[189493]: 2025-12-09 11:08:29.936 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:08:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:08:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:08:31 compute-0 podman[249552]: 2025-12-09 11:08:31.960037163 +0000 UTC m=+0.107593589 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 11:08:32 compute-0 nova_compute[189493]: 2025-12-09 11:08:32.694 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:34 compute-0 nova_compute[189493]: 2025-12-09 11:08:34.940 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:35 compute-0 podman[249578]: 2025-12-09 11:08:35.938676408 +0000 UTC m=+0.088984283 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 09 11:08:35 compute-0 podman[249577]: 2025-12-09 11:08:35.950362842 +0000 UTC m=+0.101900908 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm)
Dec 09 11:08:37 compute-0 nova_compute[189493]: 2025-12-09 11:08:37.696 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:38 compute-0 nova_compute[189493]: 2025-12-09 11:08:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:08:38 compute-0 nova_compute[189493]: 2025-12-09 11:08:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:08:38 compute-0 nova_compute[189493]: 2025-12-09 11:08:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:08:38 compute-0 podman[249613]: 2025-12-09 11:08:38.917891228 +0000 UTC m=+0.068558060 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 11:08:38 compute-0 podman[249614]: 2025-12-09 11:08:38.942146761 +0000 UTC m=+0.091584502 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.899 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278504.8977005, 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 09 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.899 189497 INFO nova.compute.manager [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] VM Stopped (Lifecycle Event)
Dec 09 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.944 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.964 189497 DEBUG nova.compute.manager [None req-e0a859fd-beb7-4372-a98b-92e2d985e9fc - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 09 11:08:42 compute-0 nova_compute[189493]: 2025-12-09 11:08:42.700 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:44 compute-0 nova_compute[189493]: 2025-12-09 11:08:44.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:08:44 compute-0 nova_compute[189493]: 2025-12-09 11:08:44.949 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.889 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.889 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.889 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.890 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.252 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.254 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5363MB free_disk=72.17672348022461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.255 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.255 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.566 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.568 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.599 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.618 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.646 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.647 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:08:46 compute-0 podman[249651]: 2025-12-09 11:08:46.970610935 +0000 UTC m=+0.107043923 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 11:08:47 compute-0 podman[249650]: 2025-12-09 11:08:47.006159263 +0000 UTC m=+0.139720746 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Dec 09 11:08:47 compute-0 podman[249652]: 2025-12-09 11:08:47.020686592 +0000 UTC m=+0.150535769 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec 09 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.649 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.650 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.678 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.706 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:08:49 compute-0 nova_compute[189493]: 2025-12-09 11:08:49.953 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:52 compute-0 nova_compute[189493]: 2025-12-09 11:08:52.707 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:52 compute-0 nova_compute[189493]: 2025-12-09 11:08:52.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:08:52 compute-0 nova_compute[189493]: 2025-12-09 11:08:52.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:08:53 compute-0 sshd-session[249720]: Invalid user docker from 159.223.8.217 port 43932
Dec 09 11:08:53 compute-0 sshd-session[249720]: Connection closed by invalid user docker 159.223.8.217 port 43932 [preauth]
Dec 09 11:08:54 compute-0 nova_compute[189493]: 2025-12-09 11:08:54.958 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:55 compute-0 ovn_controller[97780]: 2025-12-09T11:08:55Z|00064|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec 09 11:08:57 compute-0 nova_compute[189493]: 2025-12-09 11:08:57.710 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:08:59 compute-0 podman[249722]: 2025-12-09 11:08:59.025270263 +0000 UTC m=+0.164194595 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 11:08:59 compute-0 podman[203687]: time="2025-12-09T11:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:08:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:08:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Dec 09 11:08:59 compute-0 nova_compute[189493]: 2025-12-09 11:08:59.964 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:09:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:09:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:09:02 compute-0 nova_compute[189493]: 2025-12-09 11:09:02.713 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:02 compute-0 podman[249742]: 2025-12-09 11:09:02.961191622 +0000 UTC m=+0.105630657 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 11:09:04 compute-0 nova_compute[189493]: 2025-12-09 11:09:04.974 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:06 compute-0 podman[249765]: 2025-12-09 11:09:06.984420529 +0000 UTC m=+0.123271247 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64)
Dec 09 11:09:06 compute-0 podman[249766]: 2025-12-09 11:09:06.989563604 +0000 UTC m=+0.122423906 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 09 11:09:07 compute-0 nova_compute[189493]: 2025-12-09 11:09:07.719 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:09 compute-0 podman[249801]: 2025-12-09 11:09:09.941354229 +0000 UTC m=+0.092638288 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec 09 11:09:09 compute-0 podman[249802]: 2025-12-09 11:09:09.961127525 +0000 UTC m=+0.097252569 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 09 11:09:09 compute-0 nova_compute[189493]: 2025-12-09 11:09:09.980 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:12 compute-0 nova_compute[189493]: 2025-12-09 11:09:12.721 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:14 compute-0 nova_compute[189493]: 2025-12-09 11:09:14.985 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:09:17.006 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:09:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:09:17.007 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:09:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:09:17.007 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:09:17 compute-0 nova_compute[189493]: 2025-12-09 11:09:17.726 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:17 compute-0 podman[249838]: 2025-12-09 11:09:17.995984205 +0000 UTC m=+0.140832365 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 09 11:09:17 compute-0 podman[249839]: 2025-12-09 11:09:17.997643109 +0000 UTC m=+0.144560343 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:09:18 compute-0 podman[249840]: 2025-12-09 11:09:18.023509513 +0000 UTC m=+0.152443718 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 11:09:19 compute-0 nova_compute[189493]: 2025-12-09 11:09:19.990 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:22 compute-0 nova_compute[189493]: 2025-12-09 11:09:22.731 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:22 compute-0 sshd-session[249906]: Invalid user docker from 159.223.8.217 port 43484
Dec 09 11:09:22 compute-0 sshd-session[249906]: Connection closed by invalid user docker 159.223.8.217 port 43484 [preauth]
Dec 09 11:09:24 compute-0 nova_compute[189493]: 2025-12-09 11:09:24.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:27 compute-0 nova_compute[189493]: 2025-12-09 11:09:27.745 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:29 compute-0 podman[203687]: time="2025-12-09T11:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:09:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:09:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Dec 09 11:09:30 compute-0 podman[249908]: 2025-12-09 11:09:30.000227277 +0000 UTC m=+0.152203641 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 11:09:30 compute-0 nova_compute[189493]: 2025-12-09 11:09:30.002 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:09:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:09:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:09:32 compute-0 nova_compute[189493]: 2025-12-09 11:09:32.747 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:33 compute-0 podman[249927]: 2025-12-09 11:09:33.976689864 +0000 UTC m=+0.116211453 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 11:09:35 compute-0 nova_compute[189493]: 2025-12-09 11:09:35.011 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:37 compute-0 nova_compute[189493]: 2025-12-09 11:09:37.749 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:37 compute-0 podman[249951]: 2025-12-09 11:09:37.974433166 +0000 UTC m=+0.114092538 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, name=ubi9, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 09 11:09:38 compute-0 podman[249952]: 2025-12-09 11:09:38.015415524 +0000 UTC m=+0.147062907 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 09 11:09:38 compute-0 nova_compute[189493]: 2025-12-09 11:09:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:40 compute-0 nova_compute[189493]: 2025-12-09 11:09:40.016 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:40 compute-0 nova_compute[189493]: 2025-12-09 11:09:40.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:40 compute-0 nova_compute[189493]: 2025-12-09 11:09:40.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:40 compute-0 podman[249988]: 2025-12-09 11:09:40.942444543 +0000 UTC m=+0.092129046 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 09 11:09:40 compute-0 podman[249989]: 2025-12-09 11:09:40.949237381 +0000 UTC m=+0.098757530 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 11:09:41 compute-0 nova_compute[189493]: 2025-12-09 11:09:41.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:41 compute-0 nova_compute[189493]: 2025-12-09 11:09:41.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:42 compute-0 nova_compute[189493]: 2025-12-09 11:09:42.752 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:44 compute-0 nova_compute[189493]: 2025-12-09 11:09:44.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.500 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.876 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.877 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.877 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.877 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.210 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.211 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5364MB free_disk=72.17672348022461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.211 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.212 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.650 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.650 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.783 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.900 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.901 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.918 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.947 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.970 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.985 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.986 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.986 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:09:47 compute-0 nova_compute[189493]: 2025-12-09 11:09:47.757 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.533 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.534 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.849 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.849 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.850 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.869 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 11:09:48 compute-0 podman[250026]: 2025-12-09 11:09:48.965229357 +0000 UTC m=+0.105658513 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, config_id=edpm, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Dec 09 11:09:48 compute-0 podman[250027]: 2025-12-09 11:09:48.988099963 +0000 UTC m=+0.120544938 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 11:09:49 compute-0 podman[250028]: 2025-12-09 11:09:48.999954167 +0000 UTC m=+0.127327467 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 09 11:09:50 compute-0 nova_compute[189493]: 2025-12-09 11:09:50.503 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:52 compute-0 sshd-session[250097]: Invalid user docker from 159.223.8.217 port 44038
Dec 09 11:09:52 compute-0 sshd-session[250097]: Connection closed by invalid user docker 159.223.8.217 port 44038 [preauth]
Dec 09 11:09:52 compute-0 nova_compute[189493]: 2025-12-09 11:09:52.761 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:52 compute-0 nova_compute[189493]: 2025-12-09 11:09:52.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:52 compute-0 nova_compute[189493]: 2025-12-09 11:09:52.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 11:09:54 compute-0 nova_compute[189493]: 2025-12-09 11:09:54.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:54 compute-0 nova_compute[189493]: 2025-12-09 11:09:54.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:09:54 compute-0 nova_compute[189493]: 2025-12-09 11:09:54.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:55 compute-0 nova_compute[189493]: 2025-12-09 11:09:55.506 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:56 compute-0 nova_compute[189493]: 2025-12-09 11:09:56.636 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:09:57 compute-0 nova_compute[189493]: 2025-12-09 11:09:57.763 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:09:59 compute-0 podman[203687]: time="2025-12-09T11:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:09:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:09:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4333 "" "Go-http-client/1.1"
Dec 09 11:10:00 compute-0 nova_compute[189493]: 2025-12-09 11:10:00.510 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:01 compute-0 podman[250099]: 2025-12-09 11:10:01.010311952 +0000 UTC m=+0.152052684 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 09 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:10:02 compute-0 nova_compute[189493]: 2025-12-09 11:10:02.765 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:04 compute-0 podman[250117]: 2025-12-09 11:10:04.956448587 +0000 UTC m=+0.102737965 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 11:10:05 compute-0 nova_compute[189493]: 2025-12-09 11:10:05.512 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:07 compute-0 nova_compute[189493]: 2025-12-09 11:10:07.767 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:08 compute-0 podman[250141]: 2025-12-09 11:10:08.96960544 +0000 UTC m=+0.110462041 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi)
Dec 09 11:10:08 compute-0 podman[250140]: 2025-12-09 11:10:08.987243277 +0000 UTC m=+0.133345376 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=base rhel9, container_name=kepler, distribution-scope=public, name=ubi9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, version=9.4)
Dec 09 11:10:10 compute-0 nova_compute[189493]: 2025-12-09 11:10:10.515 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:11 compute-0 podman[250176]: 2025-12-09 11:10:11.977407872 +0000 UTC m=+0.119765657 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 11:10:11 compute-0 podman[250177]: 2025-12-09 11:10:11.995617914 +0000 UTC m=+0.130015208 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 09 11:10:12 compute-0 nova_compute[189493]: 2025-12-09 11:10:12.770 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:13 compute-0 nova_compute[189493]: 2025-12-09 11:10:13.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:13 compute-0 nova_compute[189493]: 2025-12-09 11:10:13.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 11:10:13 compute-0 nova_compute[189493]: 2025-12-09 11:10:13.860 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 11:10:15 compute-0 nova_compute[189493]: 2025-12-09 11:10:15.518 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:10:17.008 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:10:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:10:17.009 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:10:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:10:17.010 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:10:17 compute-0 nova_compute[189493]: 2025-12-09 11:10:17.774 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:19 compute-0 sshd-session[250213]: Invalid user user from 45.148.10.121 port 49424
Dec 09 11:10:19 compute-0 sshd-session[250213]: Connection closed by invalid user user 45.148.10.121 port 49424 [preauth]
Dec 09 11:10:19 compute-0 podman[250216]: 2025-12-09 11:10:19.573675052 +0000 UTC m=+0.104737498 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 11:10:19 compute-0 podman[250215]: 2025-12-09 11:10:19.57399563 +0000 UTC m=+0.113793608 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 09 11:10:19 compute-0 podman[250217]: 2025-12-09 11:10:19.602908068 +0000 UTC m=+0.122837829 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 09 11:10:20 compute-0 nova_compute[189493]: 2025-12-09 11:10:20.521 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:22 compute-0 sshd-session[250284]: Invalid user docker from 159.223.8.217 port 33862
Dec 09 11:10:22 compute-0 sshd-session[250284]: Connection closed by invalid user docker 159.223.8.217 port 33862 [preauth]
Dec 09 11:10:22 compute-0 nova_compute[189493]: 2025-12-09 11:10:22.778 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.299 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.300 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
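[editor's note] The polling cycle above ends with every compute pollster either skipped ("Skip pollster <name>, no  resources found this cycle") or finished with nothing to report, which is consistent with no instances running on this host. A minimal sketch for summarizing such a cycle offline, assuming the journal has been exported to a plain text file (the filename journal.txt and the regex are illustrative, derived only from the message format visible above):

# Count how many times each pollster was skipped for lack of discovered
# local instances, based on the ceilometer.polling.manager messages above.
import re
from collections import Counter

SKIP_RE = re.compile(r"Skip pollster (\S+?),")

skips = Counter()
with open("journal.txt") as fh:          # hypothetical export of this journal
    for line in fh:
        m = SKIP_RE.search(line)
        if m:
            skips[m.group(1)] += 1

for meter, count in skips.most_common():
    print(f"{meter}: skipped {count} time(s)")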
Dec 09 11:10:25 compute-0 nova_compute[189493]: 2025-12-09 11:10:25.525 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:27 compute-0 nova_compute[189493]: 2025-12-09 11:10:27.781 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:29 compute-0 podman[203687]: time="2025-12-09T11:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:10:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:10:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec 09 11:10:30 compute-0 nova_compute[189493]: 2025-12-09 11:10:30.527 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:10:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:10:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:10:31 compute-0 podman[250287]: 2025-12-09 11:10:31.988123263 +0000 UTC m=+0.134726583 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Dec 09 11:10:32 compute-0 nova_compute[189493]: 2025-12-09 11:10:32.787 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:35 compute-0 nova_compute[189493]: 2025-12-09 11:10:35.530 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:35 compute-0 podman[250307]: 2025-12-09 11:10:35.950248773 +0000 UTC m=+0.084172973 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 11:10:37 compute-0 nova_compute[189493]: 2025-12-09 11:10:37.790 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:38 compute-0 nova_compute[189493]: 2025-12-09 11:10:38.861 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:39 compute-0 podman[250331]: 2025-12-09 11:10:39.950144015 +0000 UTC m=+0.092321680 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Dec 09 11:10:39 compute-0 podman[250330]: 2025-12-09 11:10:39.980724756 +0000 UTC m=+0.120432855 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec 09 11:10:40 compute-0 nova_compute[189493]: 2025-12-09 11:10:40.535 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:40 compute-0 nova_compute[189493]: 2025-12-09 11:10:40.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:41 compute-0 nova_compute[189493]: 2025-12-09 11:10:41.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:42 compute-0 nova_compute[189493]: 2025-12-09 11:10:42.793 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:42 compute-0 nova_compute[189493]: 2025-12-09 11:10:42.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:42 compute-0 podman[250366]: 2025-12-09 11:10:42.960093113 +0000 UTC m=+0.104007950 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 09 11:10:42 compute-0 podman[250367]: 2025-12-09 11:10:42.987732596 +0000 UTC m=+0.127579084 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec 09 11:10:45 compute-0 nova_compute[189493]: 2025-12-09 11:10:45.539 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:45 compute-0 nova_compute[189493]: 2025-12-09 11:10:45.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:47 compute-0 nova_compute[189493]: 2025-12-09 11:10:47.795 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:47 compute-0 nova_compute[189493]: 2025-12-09 11:10:47.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:47 compute-0 nova_compute[189493]: 2025-12-09 11:10:47.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.082 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.083 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.084 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.085 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.518 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.519 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5370MB free_disk=72.17672348022461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.519 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.519 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.435 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.435 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.573 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.615 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.617 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.617 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
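[editor's note] The resource-tracker pass above reports the inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 as totals plus reserved amounts and allocation ratios. As a rough cross-check, Placement-style schedulable capacity is (total - reserved) * allocation_ratio; the sketch below just replays that arithmetic with the values copied from the 11:10:49.615 log line:

# Schedulable capacity implied by the inventory nova reported above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2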
Dec 09 11:10:49 compute-0 podman[250407]: 2025-12-09 11:10:49.98417487 +0000 UTC m=+0.124052251 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 11:10:49 compute-0 podman[250406]: 2025-12-09 11:10:49.987730424 +0000 UTC m=+0.142065818 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6)
Dec 09 11:10:50 compute-0 podman[250408]: 2025-12-09 11:10:50.070625033 +0000 UTC m=+0.206349684 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 11:10:50 compute-0 nova_compute[189493]: 2025-12-09 11:10:50.541 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:51 compute-0 nova_compute[189493]: 2025-12-09 11:10:51.617 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:51 compute-0 nova_compute[189493]: 2025-12-09 11:10:51.618 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:10:51 compute-0 nova_compute[189493]: 2025-12-09 11:10:51.618 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:10:51 compute-0 nova_compute[189493]: 2025-12-09 11:10:51.696 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 11:10:52 compute-0 sshd-session[250470]: Invalid user dspace from 159.223.8.217 port 48050
Dec 09 11:10:52 compute-0 sshd-session[250470]: Connection closed by invalid user dspace 159.223.8.217 port 48050 [preauth]
Dec 09 11:10:52 compute-0 nova_compute[189493]: 2025-12-09 11:10:52.799 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:55 compute-0 nova_compute[189493]: 2025-12-09 11:10:55.545 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:55 compute-0 nova_compute[189493]: 2025-12-09 11:10:55.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:10:55 compute-0 nova_compute[189493]: 2025-12-09 11:10:55.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:10:57 compute-0 nova_compute[189493]: 2025-12-09 11:10:57.801 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:10:59 compute-0 podman[203687]: time="2025-12-09T11:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:10:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:10:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec 09 11:11:00 compute-0 nova_compute[189493]: 2025-12-09 11:11:00.548 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:01 compute-0 anacron[30864]: Job `cron.weekly' started
Dec 09 11:11:01 compute-0 anacron[30864]: Job `cron.weekly' terminated
Dec 09 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:11:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:11:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:11:02 compute-0 nova_compute[189493]: 2025-12-09 11:11:02.804 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:02 compute-0 podman[250474]: 2025-12-09 11:11:02.968294888 +0000 UTC m=+0.114662252 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 09 11:11:05 compute-0 nova_compute[189493]: 2025-12-09 11:11:05.553 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:06 compute-0 podman[250494]: 2025-12-09 11:11:06.961045519 +0000 UTC m=+0.114566260 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 11:11:07 compute-0 nova_compute[189493]: 2025-12-09 11:11:07.812 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:10 compute-0 nova_compute[189493]: 2025-12-09 11:11:10.557 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:10 compute-0 podman[250517]: 2025-12-09 11:11:10.954432087 +0000 UTC m=+0.100957229 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 11:11:10 compute-0 podman[250516]: 2025-12-09 11:11:10.993678748 +0000 UTC m=+0.146273480 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.openshift.expose-services=)
Dec 09 11:11:12 compute-0 nova_compute[189493]: 2025-12-09 11:11:12.814 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:13 compute-0 podman[250552]: 2025-12-09 11:11:13.955367908 +0000 UTC m=+0.089955338 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 09 11:11:13 compute-0 podman[250553]: 2025-12-09 11:11:13.982334312 +0000 UTC m=+0.108325653 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 09 11:11:15 compute-0 nova_compute[189493]: 2025-12-09 11:11:15.561 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:11:17.010 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:11:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:11:17.011 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:11:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:11:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:11:17 compute-0 nova_compute[189493]: 2025-12-09 11:11:17.816 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:20 compute-0 nova_compute[189493]: 2025-12-09 11:11:20.563 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:20 compute-0 podman[250591]: 2025-12-09 11:11:20.931999877 +0000 UTC m=+0.077349443 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:11:20 compute-0 podman[250590]: 2025-12-09 11:11:20.963541693 +0000 UTC m=+0.116674624 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7)
Dec 09 11:11:20 compute-0 podman[250597]: 2025-12-09 11:11:20.983102592 +0000 UTC m=+0.118494043 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 09 11:11:21 compute-0 sshd-session[250615]: Invalid user dspace from 159.223.8.217 port 35070
Dec 09 11:11:21 compute-0 sshd-session[250615]: Connection closed by invalid user dspace 159.223.8.217 port 35070 [preauth]
Dec 09 11:11:22 compute-0 nova_compute[189493]: 2025-12-09 11:11:22.820 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:25 compute-0 nova_compute[189493]: 2025-12-09 11:11:25.566 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:27 compute-0 nova_compute[189493]: 2025-12-09 11:11:27.823 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:29 compute-0 podman[203687]: time="2025-12-09T11:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:11:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:11:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec 09 11:11:30 compute-0 nova_compute[189493]: 2025-12-09 11:11:30.570 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:11:32 compute-0 nova_compute[189493]: 2025-12-09 11:11:32.826 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:33 compute-0 podman[250656]: 2025-12-09 11:11:33.960161602 +0000 UTC m=+0.109659499 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 11:11:35 compute-0 nova_compute[189493]: 2025-12-09 11:11:35.573 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:37 compute-0 nova_compute[189493]: 2025-12-09 11:11:37.829 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:37 compute-0 podman[250676]: 2025-12-09 11:11:37.963352921 +0000 UTC m=+0.105432747 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 09 11:11:40 compute-0 nova_compute[189493]: 2025-12-09 11:11:40.577 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:40 compute-0 nova_compute[189493]: 2025-12-09 11:11:40.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:41 compute-0 podman[250700]: 2025-12-09 11:11:41.9835817 +0000 UTC m=+0.119929222 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 11:11:42 compute-0 podman[250699]: 2025-12-09 11:11:42.000050606 +0000 UTC m=+0.139942362 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, config_id=edpm, version=9.4, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9)
Dec 09 11:11:42 compute-0 nova_compute[189493]: 2025-12-09 11:11:42.832 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:42 compute-0 nova_compute[189493]: 2025-12-09 11:11:42.835 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:42 compute-0 nova_compute[189493]: 2025-12-09 11:11:42.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:42 compute-0 nova_compute[189493]: 2025-12-09 11:11:42.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:44 compute-0 podman[250736]: 2025-12-09 11:11:44.742189233 +0000 UTC m=+0.071733963 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 11:11:44 compute-0 podman[250737]: 2025-12-09 11:11:44.742447131 +0000 UTC m=+0.065006286 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 09 11:11:44 compute-0 nova_compute[189493]: 2025-12-09 11:11:44.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:45 compute-0 nova_compute[189493]: 2025-12-09 11:11:45.579 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:45 compute-0 nova_compute[189493]: 2025-12-09 11:11:45.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:47 compute-0 nova_compute[189493]: 2025-12-09 11:11:47.834 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.988 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.989 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.989 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.990 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.349 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.350 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5371MB free_disk=72.17669677734375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.351 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.351 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.446 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.447 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.490 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.509 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.510 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.511 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:11:50 compute-0 sshd-session[250775]: Invalid user dspace from 159.223.8.217 port 59642
Dec 09 11:11:50 compute-0 sshd-session[250775]: Connection closed by invalid user dspace 159.223.8.217 port 59642 [preauth]
Dec 09 11:11:50 compute-0 nova_compute[189493]: 2025-12-09 11:11:50.510 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:50 compute-0 nova_compute[189493]: 2025-12-09 11:11:50.581 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:51 compute-0 nova_compute[189493]: 2025-12-09 11:11:51.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:51 compute-0 nova_compute[189493]: 2025-12-09 11:11:51.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:11:51 compute-0 nova_compute[189493]: 2025-12-09 11:11:51.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:11:51 compute-0 nova_compute[189493]: 2025-12-09 11:11:51.861 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 11:11:51 compute-0 podman[250777]: 2025-12-09 11:11:51.975875109 +0000 UTC m=+0.117922148 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Dec 09 11:11:51 compute-0 podman[250778]: 2025-12-09 11:11:51.999028293 +0000 UTC m=+0.134479907 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 11:11:52 compute-0 podman[250779]: 2025-12-09 11:11:52.018855909 +0000 UTC m=+0.149147916 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 11:11:52 compute-0 nova_compute[189493]: 2025-12-09 11:11:52.837 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:55 compute-0 nova_compute[189493]: 2025-12-09 11:11:55.583 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:55 compute-0 nova_compute[189493]: 2025-12-09 11:11:55.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:11:55 compute-0 nova_compute[189493]: 2025-12-09 11:11:55.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:11:57 compute-0 nova_compute[189493]: 2025-12-09 11:11:57.840 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:11:59 compute-0 podman[203687]: time="2025-12-09T11:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:11:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:11:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec 09 11:12:00 compute-0 nova_compute[189493]: 2025-12-09 11:12:00.585 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:12:02 compute-0 nova_compute[189493]: 2025-12-09 11:12:02.844 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:05 compute-0 podman[250839]: 2025-12-09 11:12:05.017521173 +0000 UTC m=+0.147233335 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Dec 09 11:12:05 compute-0 nova_compute[189493]: 2025-12-09 11:12:05.589 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:07 compute-0 nova_compute[189493]: 2025-12-09 11:12:07.847 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:08 compute-0 podman[250859]: 2025-12-09 11:12:08.974118815 +0000 UTC m=+0.123406324 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:12:10 compute-0 nova_compute[189493]: 2025-12-09 11:12:10.592 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:12 compute-0 nova_compute[189493]: 2025-12-09 11:12:12.849 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:12 compute-0 podman[250884]: 2025-12-09 11:12:12.970131194 +0000 UTC m=+0.107050720 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec 09 11:12:13 compute-0 podman[250883]: 2025-12-09 11:12:13.013941506 +0000 UTC m=+0.155587427 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_id=edpm, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=)
Dec 09 11:12:14 compute-0 podman[250920]: 2025-12-09 11:12:14.954503435 +0000 UTC m=+0.101434830 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 11:12:14 compute-0 podman[250921]: 2025-12-09 11:12:14.957667509 +0000 UTC m=+0.100692361 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 09 11:12:15 compute-0 nova_compute[189493]: 2025-12-09 11:12:15.595 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:12:17.011 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:12:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:12:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:12:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:12:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:12:17 compute-0 nova_compute[189493]: 2025-12-09 11:12:17.854 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:19 compute-0 sshd-session[250955]: Invalid user dspace from 159.223.8.217 port 49832
Dec 09 11:12:19 compute-0 sshd-session[250955]: Connection closed by invalid user dspace 159.223.8.217 port 49832 [preauth]
Dec 09 11:12:20 compute-0 nova_compute[189493]: 2025-12-09 11:12:20.598 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:22 compute-0 nova_compute[189493]: 2025-12-09 11:12:22.858 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:22 compute-0 podman[250958]: 2025-12-09 11:12:22.936725331 +0000 UTC m=+0.088364205 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Dec 09 11:12:22 compute-0 podman[250959]: 2025-12-09 11:12:22.939183786 +0000 UTC m=+0.085574971 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 11:12:22 compute-0 podman[250960]: 2025-12-09 11:12:22.969205592 +0000 UTC m=+0.120626820 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.300 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.300 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.316 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.316 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:12:25 compute-0 nova_compute[189493]: 2025-12-09 11:12:25.601 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:27 compute-0 nova_compute[189493]: 2025-12-09 11:12:27.868 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:29 compute-0 podman[203687]: time="2025-12-09T11:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:12:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:12:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4331 "" "Go-http-client/1.1"
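The two GET lines are the podman API service answering prometheus-podman-exporter, which (per its config_data later in this log) talks to unix:///run/podman/podman.sock. A hedged sketch of issuing the same containers/json query over that socket with only the Python standard library; adjust the socket path if your API service listens elsewhere.

    # HTTP over the Podman unix socket using http.client; the endpoint
    # path is the one visible in the access-log line above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(len(containers), "containers")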
Dec 09 11:12:30 compute-0 nova_compute[189493]: 2025-12-09 11:12:30.604 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:31 compute-0 openstack_network_exporter[205823]: ERROR   11:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:12:31 compute-0 openstack_network_exporter[205823]: ERROR   11:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:12:31 compute-0 openstack_network_exporter[205823]: ERROR   11:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:12:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:12:31 compute-0 openstack_network_exporter[205823]: ERROR   11:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:12:31 compute-0 openstack_network_exporter[205823]: ERROR   11:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:12:31 compute-0 openstack_network_exporter[205823]: 
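This error burst from openstack_network_exporter repeats on its scrape interval and is most likely benign on a compute node: ovn-northd and a standalone ovsdb-server control socket live on the control plane, and the pmd-rxq-show / pmd-perf-show appctl calls only apply to a userspace (netdev) datapath, which does not appear to be configured here. A quick way to confirm which control sockets actually exist under the run directories the exporter mounts (a sketch; the paths mirror the volumes in its config_data further down):

    # List appctl control sockets visible on this host; an empty result
    # for ovn-northd on a compute node is expected.
    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files")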
Dec 09 11:12:32 compute-0 nova_compute[189493]: 2025-12-09 11:12:32.868 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:35 compute-0 nova_compute[189493]: 2025-12-09 11:12:35.606 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:35 compute-0 podman[251025]: 2025-12-09 11:12:35.948996125 +0000 UTC m=+0.098797942 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 09 11:12:37 compute-0 nova_compute[189493]: 2025-12-09 11:12:37.870 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:39 compute-0 podman[251045]: 2025-12-09 11:12:39.957839472 +0000 UTC m=+0.107539263 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 11:12:40 compute-0 nova_compute[189493]: 2025-12-09 11:12:40.609 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:41 compute-0 nova_compute[189493]: 2025-12-09 11:12:41.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:12:42 compute-0 nova_compute[189493]: 2025-12-09 11:12:42.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:12:42 compute-0 nova_compute[189493]: 2025-12-09 11:12:42.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:12:42 compute-0 nova_compute[189493]: 2025-12-09 11:12:42.873 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:43 compute-0 nova_compute[189493]: 2025-12-09 11:12:43.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:12:43 compute-0 podman[251066]: 2025-12-09 11:12:43.939445637 +0000 UTC m=+0.092146495 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, config_id=edpm, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Dec 09 11:12:43 compute-0 podman[251067]: 2025-12-09 11:12:43.96103472 +0000 UTC m=+0.098705040 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 09 11:12:45 compute-0 nova_compute[189493]: 2025-12-09 11:12:45.613 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:45 compute-0 podman[251102]: 2025-12-09 11:12:45.945230717 +0000 UTC m=+0.089165415 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:12:45 compute-0 podman[251103]: 2025-12-09 11:12:45.969420359 +0000 UTC m=+0.107684707 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:12:47 compute-0 sshd-session[251141]: Invalid user dspace from 159.223.8.217 port 48404
Dec 09 11:12:47 compute-0 sshd-session[251141]: Connection closed by invalid user dspace 159.223.8.217 port 48404 [preauth]
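The sshd-session lines are unauthenticated login probes against the node's SSH port (the same source retries again at 11:13:15 below). A small helper for tallying such attempts per source address from journal output piped on stdin; it assumes only the "Invalid user <name> from <ip> port <port>" wording seen above.

    # Count "Invalid user" probes per source IP, e.g.:
    #   journalctl | grep 'Invalid user' | python3 count_probes.py
    import re
    import sys
    from collections import Counter

    pattern = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")
    hits = Counter()
    for line in sys.stdin:
        m = pattern.search(line)
        if m:
            hits[m.group(2)] += 1

    for ip, count in hits.most_common():
        print(f"{ip}\t{count}")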
Dec 09 11:12:47 compute-0 nova_compute[189493]: 2025-12-09 11:12:47.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:12:47 compute-0 nova_compute[189493]: 2025-12-09 11:12:47.876 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:49 compute-0 nova_compute[189493]: 2025-12-09 11:12:49.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:12:50 compute-0 nova_compute[189493]: 2025-12-09 11:12:50.616 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:50 compute-0 nova_compute[189493]: 2025-12-09 11:12:50.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:12:52 compute-0 nova_compute[189493]: 2025-12-09 11:12:52.877 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.329 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.330 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.330 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.331 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.725 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.726 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5385MB free_disk=72.17579650878906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.727 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.727 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:12:53 compute-0 podman[251145]: 2025-12-09 11:12:53.919054139 +0000 UTC m=+0.065015235 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 09 11:12:53 compute-0 podman[251144]: 2025-12-09 11:12:53.935135106 +0000 UTC m=+0.076978912 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 09 11:12:54 compute-0 podman[251146]: 2025-12-09 11:12:54.000585731 +0000 UTC m=+0.137757704 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 09 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.121 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.122 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.194 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.235 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.237 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.237 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
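The update_available_resource cycle above ends with the inventory that placement will schedule against. Effective capacity per resource class is (total - reserved) * allocation_ratio, so the figures reported at 11:12:54.235 work out as follows (a worked check, not output copied from the log):

    # Effective schedulable capacity from the reported inventory.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {effective:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2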
Dec 09 11:12:55 compute-0 nova_compute[189493]: 2025-12-09 11:12:55.621 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.239 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.240 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.241 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.277 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.880 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:12:59 compute-0 podman[203687]: time="2025-12-09T11:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:12:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:12:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec 09 11:13:00 compute-0 nova_compute[189493]: 2025-12-09 11:13:00.624 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:13:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:13:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:13:02 compute-0 nova_compute[189493]: 2025-12-09 11:13:02.883 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:04 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:04.399 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:13:04 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:04.400 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 11:13:04 compute-0 nova_compute[189493]: 2025-12-09 11:13:04.405 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:05.403 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:13:05 compute-0 nova_compute[189493]: 2025-12-09 11:13:05.627 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:06 compute-0 podman[251208]: 2025-12-09 11:13:06.97689641 +0000 UTC m=+0.119328385 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 11:13:07 compute-0 nova_compute[189493]: 2025-12-09 11:13:07.886 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:10 compute-0 nova_compute[189493]: 2025-12-09 11:13:10.631 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:10 compute-0 podman[251227]: 2025-12-09 11:13:10.929384454 +0000 UTC m=+0.081510593 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 11:13:12 compute-0 nova_compute[189493]: 2025-12-09 11:13:12.890 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:14 compute-0 podman[251252]: 2025-12-09 11:13:14.869419218 +0000 UTC m=+0.145179291 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 09 11:13:14 compute-0 podman[251251]: 2025-12-09 11:13:14.873015353 +0000 UTC m=+0.155644148 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, container_name=kepler, release-0.7.12=, vcs-type=git, version=9.4, com.redhat.component=ubi9-container)
Dec 09 11:13:15 compute-0 sshd-session[251286]: Invalid user dspace from 159.223.8.217 port 41362
Dec 09 11:13:15 compute-0 sshd-session[251286]: Connection closed by invalid user dspace 159.223.8.217 port 41362 [preauth]
Dec 09 11:13:15 compute-0 nova_compute[189493]: 2025-12-09 11:13:15.634 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:16 compute-0 podman[251288]: 2025-12-09 11:13:16.974272655 +0000 UTC m=+0.118711918 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec 09 11:13:17 compute-0 podman[251289]: 2025-12-09 11:13:17.00347245 +0000 UTC m=+0.142411857 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 09 11:13:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:13:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:17.013 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:13:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:17.014 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:13:17 compute-0 nova_compute[189493]: 2025-12-09 11:13:17.893 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:20 compute-0 nova_compute[189493]: 2025-12-09 11:13:20.637 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:22 compute-0 nova_compute[189493]: 2025-12-09 11:13:22.896 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:24 compute-0 podman[251326]: 2025-12-09 11:13:24.96516211 +0000 UTC m=+0.102684055 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, architecture=x86_64)
Dec 09 11:13:24 compute-0 podman[251327]: 2025-12-09 11:13:24.986815553 +0000 UTC m=+0.126385992 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 11:13:25 compute-0 podman[251328]: 2025-12-09 11:13:25.008990831 +0000 UTC m=+0.145880059 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 09 11:13:25 compute-0 nova_compute[189493]: 2025-12-09 11:13:25.640 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:27 compute-0 nova_compute[189493]: 2025-12-09 11:13:27.899 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:29 compute-0 podman[203687]: time="2025-12-09T11:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:13:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:13:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec 09 11:13:30 compute-0 nova_compute[189493]: 2025-12-09 11:13:30.644 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:13:32 compute-0 nova_compute[189493]: 2025-12-09 11:13:32.904 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:34 compute-0 ovn_controller[97780]: 2025-12-09T11:13:34Z|00065|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 09 11:13:35 compute-0 nova_compute[189493]: 2025-12-09 11:13:35.646 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:37 compute-0 nova_compute[189493]: 2025-12-09 11:13:37.907 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:37 compute-0 podman[251393]: 2025-12-09 11:13:37.991192549 +0000 UTC m=+0.136244744 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 09 11:13:40 compute-0 nova_compute[189493]: 2025-12-09 11:13:40.648 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:41 compute-0 podman[251413]: 2025-12-09 11:13:41.951872038 +0000 UTC m=+0.089539906 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 11:13:42 compute-0 nova_compute[189493]: 2025-12-09 11:13:42.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:13:42 compute-0 nova_compute[189493]: 2025-12-09 11:13:42.910 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:43 compute-0 sshd-session[251436]: Invalid user dspace from 159.223.8.217 port 60290
Dec 09 11:13:43 compute-0 sshd-session[251436]: Connection closed by invalid user dspace 159.223.8.217 port 60290 [preauth]
Dec 09 11:13:44 compute-0 nova_compute[189493]: 2025-12-09 11:13:44.155 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:13:44 compute-0 nova_compute[189493]: 2025-12-09 11:13:44.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:13:45 compute-0 nova_compute[189493]: 2025-12-09 11:13:45.063 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:13:45 compute-0 nova_compute[189493]: 2025-12-09 11:13:45.063 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:13:45 compute-0 nova_compute[189493]: 2025-12-09 11:13:45.650 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:45 compute-0 podman[251438]: 2025-12-09 11:13:45.952910129 +0000 UTC m=+0.102702944 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, io.buildah.version=1.29.0, release=1214.1726694543, distribution-scope=public, name=ubi9, io.openshift.tags=base rhel9)
Dec 09 11:13:45 compute-0 podman[251439]: 2025-12-09 11:13:45.956930216 +0000 UTC m=+0.090607594 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 09 11:13:47 compute-0 nova_compute[189493]: 2025-12-09 11:13:47.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:13:47 compute-0 nova_compute[189493]: 2025-12-09 11:13:47.913 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:47 compute-0 podman[251474]: 2025-12-09 11:13:47.985081129 +0000 UTC m=+0.132308780 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 09 11:13:47 compute-0 podman[251475]: 2025-12-09 11:13:47.987686528 +0000 UTC m=+0.118771710 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec 09 11:13:50 compute-0 nova_compute[189493]: 2025-12-09 11:13:50.653 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:50 compute-0 nova_compute[189493]: 2025-12-09 11:13:50.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:13:51 compute-0 nova_compute[189493]: 2025-12-09 11:13:51.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:13:52 compute-0 nova_compute[189493]: 2025-12-09 11:13:52.918 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.172 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.173 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.173 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.174 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.610 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.611 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5383MB free_disk=72.17579650878906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.612 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.612 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:13:55 compute-0 nova_compute[189493]: 2025-12-09 11:13:55.656 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:55 compute-0 podman[251516]: 2025-12-09 11:13:55.975352068 +0000 UTC m=+0.117171829 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 11:13:55 compute-0 podman[251515]: 2025-12-09 11:13:55.976453628 +0000 UTC m=+0.125960192 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container)
Dec 09 11:13:56 compute-0 podman[251517]: 2025-12-09 11:13:56.007696416 +0000 UTC m=+0.153713947 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 11:13:57 compute-0 nova_compute[189493]: 2025-12-09 11:13:57.922 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:13:59 compute-0 podman[203687]: time="2025-12-09T11:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:13:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:13:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec 09 11:14:00 compute-0 nova_compute[189493]: 2025-12-09 11:14:00.662 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:14:01 compute-0 nova_compute[189493]: 2025-12-09 11:14:01.875 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:14:01 compute-0 nova_compute[189493]: 2025-12-09 11:14:01.876 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.276 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.461 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 09 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.463 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.464 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 8.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.930 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.465 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.465 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.465 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.547 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.547 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.548 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:14:04 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:04.897 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.899 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:04 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:04.899 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 11:14:05 compute-0 nova_compute[189493]: 2025-12-09 11:14:05.664 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:07 compute-0 nova_compute[189493]: 2025-12-09 11:14:07.931 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:08 compute-0 podman[251580]: 2025-12-09 11:14:08.989095629 +0000 UTC m=+0.122836790 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec 09 11:14:10 compute-0 nova_compute[189493]: 2025-12-09 11:14:10.667 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:11 compute-0 sshd-session[251598]: Invalid user dspace from 159.223.8.217 port 51704
Dec 09 11:14:11 compute-0 sshd-session[251598]: Connection closed by invalid user dspace 159.223.8.217 port 51704 [preauth]
Dec 09 11:14:12 compute-0 podman[251600]: 2025-12-09 11:14:12.925713961 +0000 UTC m=+0.080131334 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 11:14:12 compute-0 nova_compute[189493]: 2025-12-09 11:14:12.937 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:14 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:14.903 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:14:15 compute-0 nova_compute[189493]: 2025-12-09 11:14:15.671 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:16 compute-0 podman[251623]: 2025-12-09 11:14:16.936296056 +0000 UTC m=+0.091748117 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9, version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Dec 09 11:14:16 compute-0 podman[251624]: 2025-12-09 11:14:16.948927365 +0000 UTC m=+0.102533168 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 09 11:14:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:17.014 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:14:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:17.014 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:14:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:17.015 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:14:17 compute-0 nova_compute[189493]: 2025-12-09 11:14:17.938 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:18 compute-0 podman[251662]: 2025-12-09 11:14:18.956499122 +0000 UTC m=+0.109398599 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 09 11:14:18 compute-0 podman[251661]: 2025-12-09 11:14:18.970054197 +0000 UTC m=+0.119057652 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 09 11:14:20 compute-0 nova_compute[189493]: 2025-12-09 11:14:20.674 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:22 compute-0 nova_compute[189493]: 2025-12-09 11:14:22.942 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.301 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.302 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.317 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.317 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.318 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec 09 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
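The ceilometer DEBUG stream above traces one complete polling cycle: each pollster is registered against a shared ThreadPoolExecutor, its local_instances discovery runs once and is cached for the cycle, the pollster is skipped when discovery returns no resources (as happens for every meter here, since this node hosts no instances), and the cycle ends with one "Finished processing pollster" line per meter. A rough sketch of that register/discover/skip/finish pattern; the names run_pollster, run_cycle, and discover_local_instances are stand-ins for illustration, not ceilometer's actual classes:

# Hedged sketch of the polling cycle visible in the log above.
from concurrent.futures import ThreadPoolExecutor

def discover_local_instances():
    # On this node discovery found no instances, which is why every
    # pollster above logs "no resources found this cycle".
    return []

def run_pollster(name, discovery_cache):
    # Discovery results are cached per cycle and shared by all pollsters.
    if "local_instances" not in discovery_cache:
        discovery_cache["local_instances"] = discover_local_instances()
    resources = discovery_cache["local_instances"]
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return
    # Real pollsters would collect samples from each resource here.

def run_cycle(pollsters, workers=1):
    # A single worker thread for many pollsters is exactly the imbalance
    # the manager warns about at the start of the cycle above.
    discovery_cache = {}
    with ThreadPoolExecutor(max_workers=workers) as executor:
        futures = {executor.submit(run_pollster, p, discovery_cache): p
                   for p in pollsters}
        for future, name in futures.items():
            future.result()
            print(f"Finished processing pollster [{name}]")

run_cycle(["cpu", "memory.usage", "disk.device.read.bytes"])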
Dec 09 11:14:25 compute-0 nova_compute[189493]: 2025-12-09 11:14:25.677 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:26 compute-0 podman[251703]: 2025-12-09 11:14:26.980613049 +0000 UTC m=+0.113456275 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 09 11:14:26 compute-0 podman[251702]: 2025-12-09 11:14:26.992957092 +0000 UTC m=+0.134718092 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 09 11:14:27 compute-0 podman[251704]: 2025-12-09 11:14:27.012914663 +0000 UTC m=+0.141562050 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
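Each podman health_status line above results from podman executing the container's configured healthcheck test (the '/openstack/healthcheck' scripts mounted read-only into each container, per the healthcheck key in config_data) and updating health_failing_streak. A simplified sketch of that bookkeeping; the retry threshold and status transition here are assumptions for illustration, not podman's exact rules:

# Hedged sketch of healthcheck bookkeeping (health_status / failing streak).
import subprocess

def run_healthcheck(container, failing_streak, retries=3):
    # 'podman healthcheck run' executes the configured test command
    # inside the container; exit code 0 means healthy.
    result = subprocess.run(["podman", "healthcheck", "run", container])
    if result.returncode == 0:
        return "healthy", 0
    failing_streak += 1
    status = "unhealthy" if failing_streak >= retries else "healthy"
    return status, failing_streak

The same check can be triggered by hand with "podman healthcheck run <name>", which exits 0 for a healthy container.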
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:27 compute-0 nova_compute[189493]: [SQL: SELECT 1]
Dec 09 11:14:27 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:27 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File 
"/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    
return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 120, in __init__\n    self.dispatch.engine_connect(self, _branch_from is not None)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/event/attr.py", line 334, in __call__\n    fn(*args, **kw)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 84, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1806, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db [SQL: SELECT 1]
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in 
_read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    
return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 120, in __init__\n    self.dispatch.engine_connect(self, _branch_from is not None)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/event/attr.py", line 334, in __call__\n    fn(*args, **kw)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 84, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1806, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._in
Dec 09 11:14:27 compute-0 nova_compute[189493]: voke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db 
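[editor's note] The traceback above repeatedly goes through oslo_db's _connect_ping_listener, which issues SELECT 1 whenever the engine hands out a connection so that an unreachable cell database surfaces as a DBConnectionError before the real query runs. The following is a minimal sketch of that general "ping on connect" pattern using SQLAlchemy 1.4's public API; it is not the oslo_db implementation, and the DSN/credentials are placeholders (only the host name is taken from this log).

    # ping_on_connect.py -- sketch only; requires SQLAlchemy 1.4 and PyMySQL
    from sqlalchemy import create_engine, event, select
    from sqlalchemy import exc

    # Placeholder DSN: host matches the log, user/password/database do not.
    engine = create_engine(
        "mysql+pymysql://nova:placeholder@openstack-cell1.openstack.svc/nova_cell1"
    )

    @event.listens_for(engine, "engine_connect")
    def ping_connection(connection, branch):
        if branch:
            # "branch" connections reuse an already-pinged parent connection
            return
        try:
            connection.scalar(select(1))  # the same SELECT 1 seen in the tracebacks
        except exc.DBAPIError as err:
            if err.connection_invalidated:
                # stale pooled connection: retrying transparently reconnects
                connection.scalar(select(1))
            else:
                raise

    if __name__ == "__main__":
        try:
            with engine.connect():
                print("database reachable")
        except exc.OperationalError as err:
            # with the cell database down this prints the (2003, "Can't connect ...") error
            print("connection failed:", err)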
Dec 09 11:14:27 compute-0 rsyslogd[236818]: message too long (15362) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-pack [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:27 compute-0 rsyslogd[236818]: message too long (14504) with configured size 8096, begin of message is: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.944 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:29 compute-0 podman[203687]: time="2025-12-09T11:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:14:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:14:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec 09 11:14:30 compute-0 ovn_controller[97780]: 2025-12-09T11:14:30Z|00066|ovsdb_idl|WARN|transaction error: {"details":"Transaction causes multiple rows in \"MAC_Binding\" table to have identical values (lrp-fe50431a-d63b-4063-9df9-acee8df3bf71 and \"192.168.122.80\") for index on columns \"logical_port\" and \"ip\".  First row, with UUID 563226b4-7651-4a2e-af7c-224282404230, was inserted by this transaction.  Second row, with UUID 4b89d1fa-5e73-4206-8c2b-589a01ad469a, existed in the database before this transaction and was not modified by the transaction.","error":"constraint violation"}
Dec 09 11:14:30 compute-0 ovn_controller[97780]: 2025-12-09T11:14:30Z|00067|main|INFO|OVNSB commit failed, force recompute next time.
Dec 09 11:14:30 compute-0 nova_compute[189493]: 2025-12-09 11:14:30.437 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:30 compute-0 nova_compute[189493]: 2025-12-09 11:14:30.541 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:30 compute-0 nova_compute[189493]: 2025-12-09 11:14:30.679 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:14:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:14:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:14:32 compute-0 nova_compute[189493]: 2025-12-09 11:14:32.946 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:33 compute-0 ovn_controller[97780]: 2025-12-09T11:14:33Z|00068|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.682 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:35 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:35 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db 
Dec 09 11:14:36 compute-0 rsyslogd[236818]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:36 compute-0 rsyslogd[236818]: message too long (9052) with configured size 8096, begin of message is: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:37 compute-0 nova_compute[189493]: 2025-12-09 11:14:37.951 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:39 compute-0 sshd-session[251767]: Invalid user dspace from 159.223.8.217 port 41486
Dec 09 11:14:39 compute-0 sshd-session[251767]: Connection closed by invalid user dspace 159.223.8.217 port 41486 [preauth]
Dec 09 11:14:39 compute-0 podman[251769]: 2025-12-09 11:14:39.466387967 +0000 UTC m=+0.091193013 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec 09 11:14:40 compute-0 nova_compute[189493]: 2025-12-09 11:14:40.685 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:42 compute-0 nova_compute[189493]: 2025-12-09 11:14:42.848 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:42 compute-0 nova_compute[189493]: 2025-12-09 11:14:42.955 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:44 compute-0 nova_compute[189493]: 2025-12-09 11:14:44.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:44 compute-0 nova_compute[189493]: 2025-12-09 11:14:44.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:44 compute-0 podman[251789]: 2025-12-09 11:14:44.883055285 +0000 UTC m=+0.101499753 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.688 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:45 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:45 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db 
Dec 09 11:14:46 compute-0 rsyslogd[236818]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:46 compute-0 rsyslogd[236818]: message too long (9052) with configured size 8096, begin of message is: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:46 compute-0 nova_compute[189493]: 2025-12-09 11:14:46.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:47 compute-0 nova_compute[189493]: 2025-12-09 11:14:47.957 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:47 compute-0 podman[251813]: 2025-12-09 11:14:47.979646562 +0000 UTC m=+0.117078230 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, version=9.4, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec 09 11:14:48 compute-0 podman[251814]: 2025-12-09 11:14:48.001134454 +0000 UTC m=+0.132039321 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 09 11:14:49 compute-0 nova_compute[189493]: 2025-12-09 11:14:49.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:49 compute-0 podman[251852]: 2025-12-09 11:14:49.985015101 +0000 UTC m=+0.118527677 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 09 11:14:49 compute-0 podman[251851]: 2025-12-09 11:14:49.988022089 +0000 UTC m=+0.127475040 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 09 11:14:50 compute-0 nova_compute[189493]: 2025-12-09 11:14:50.692 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:50 compute-0 nova_compute[189493]: 2025-12-09 11:14:50.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:52 compute-0 nova_compute[189493]: 2025-12-09 11:14:52.962 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:53 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:53 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     task(self, context)
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     raise result
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task 
Dec 09 11:14:54 compute-0 rsyslogd[236818]: message too long (8132) with configured size 8096, begin of message is: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:55 compute-0 nova_compute[189493]: 2025-12-09 11:14:55.695 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:55 compute-0 nova_compute[189493]: 2025-12-09 11:14:55.899 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:55 compute-0 nova_compute[189493]: 2025-12-09 11:14:55.900 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:14:55 compute-0 nova_compute[189493]: 2025-12-09 11:14:55.900 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:56 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:56 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec 09 11:14:56 compute-0 rsyslogd[236818]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db 
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:56 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:56 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     task(self, context)
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9863, in _heal_instance_info_cache
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     db_instances = objects.InstanceList.get_by_host(
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 09 11:14:56 compute-0 rsyslogd[236818]: message too long (9052) with configured size 8096, begin of message is: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     raise result
Dec 09 11:14:56 compute-0 rsyslogd[236818]: message too long (8558) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in 
checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task 
Dec 09 11:14:56 compute-0 rsyslogd[236818]: message too long (8622) with configured size 8096, begin of message is: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:14:57 compute-0 podman[251888]: 2025-12-09 11:14:57.956634506 +0000 UTC m=+0.101378881 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc.)
Dec 09 11:14:57 compute-0 nova_compute[189493]: 2025-12-09 11:14:57.972 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:14:57 compute-0 podman[251889]: 2025-12-09 11:14:57.992159903 +0000 UTC m=+0.128297043 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 09 11:14:58 compute-0 podman[251890]: 2025-12-09 11:14:58.037072706 +0000 UTC m=+0.168058971 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:14:59 compute-0 podman[203687]: time="2025-12-09T11:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:14:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:14:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Error during ComputeManager._cleanup_incomplete_migrations: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:59 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:59 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/migration.py", line 266, in get_by_filters\n    db_migrations = db.migration_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3457, in migration_get_all_by_filters\n    return query.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n  
  compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     task(self, context)
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11186, in _cleanup_incomplete_migrations
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     migrations = objects.MigrationList.get_by_filters(context,
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     raise result
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/migration.py", line 266, in get_by_filters\n    db_migrations = db.migration_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3457, in migration_get_all_by_filters\n    return query.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on 
connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task 
Dec 09 11:15:00 compute-0 rsyslogd[236818]: message too long (8248) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:15:00 compute-0 rsyslogd[236818]: message too long (8312) with configured size 8096, begin of message is: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.700 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Error during ComputeManager._cleanup_expired_console_auth_tokens: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:15:00 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:15:00 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     task(self, context)
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11282, in _cleanup_expired_console_auth_tokens
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     objects.ConsoleAuthToken.clean_expired_console_auths(context)
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     raise result
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task 
Dec 09 11:15:00 compute-0 rsyslogd[236818]: message too long (8183) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:15:01 compute-0 rsyslogd[236818]: message too long (8247) with configured size 8096, begin of message is: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:15:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:15:01 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:15:02 compute-0 nova_compute[189493]: 2025-12-09 11:15:02.971 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:05 compute-0 nova_compute[189493]: 2025-12-09 11:15:05.702 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:15:06 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:15:06 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec 09 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db 
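The traceback above shows nova-compute's service heartbeat failing because the cell database at openstack-cell1.openstack.svc refused the TCP connection ([Errno 111]); the service reports recovery a few seconds later at 11:15:15. A minimal, hypothetical connectivity probe that would reproduce the same symptom is sketched below — only the hostname comes from the log, the port is an assumed MySQL/MariaDB default, and this snippet is not part of the captured output.

    # Hedged sketch, not part of the log: probe the cell DB endpoint named in the
    # DBConnectionError above. Hostname from the log; port 3306 is an assumption.
    import socket

    HOST = "openstack-cell1.openstack.svc"   # from pymysql.err.OperationalError (2003)
    PORT = 3306                               # assumed default MySQL/MariaDB port

    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print(f"TCP connect to {HOST}:{PORT} succeeded")
    except OSError as exc:
        # A refused connection here matches the [Errno 111] seen in the traceback.
        print(f"TCP connect to {HOST}:{PORT} failed: {exc}")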
Dec 09 11:15:06 compute-0 rsyslogd[236818]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 09 11:15:06 compute-0 rsyslogd[236818]: message too long (9052) with configured size 8096, begin of message is: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
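Both rsyslog warnings above refer to the same oversized traceback: the forwarded copies exceed rsyslog's configured 8096-byte message limit and are truncated (error code 2445, linked in the messages). If capturing such tracebacks intact mattered, raising the limit — for example with global(maxMessageSize="16k") in /etc/rsyslog.conf, or the legacy $MaxMessageSize directive — would be one way to do it; this is a general rsyslog setting and is not shown anywhere in this log.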
Dec 09 11:15:07 compute-0 sshd-session[251953]: Invalid user dspace from 159.223.8.217 port 35658
Dec 09 11:15:07 compute-0 sshd-session[251953]: Connection closed by invalid user dspace 159.223.8.217 port 35658 [preauth]
Dec 09 11:15:07 compute-0 nova_compute[189493]: 2025-12-09 11:15:07.974 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:09 compute-0 podman[251955]: 2025-12-09 11:15:09.994564232 +0000 UTC m=+0.129076693 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 09 11:15:10 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:10.704 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 09 11:15:10 compute-0 nova_compute[189493]: 2025-12-09 11:15:10.706 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:10 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:10.707 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 09 11:15:12 compute-0 nova_compute[189493]: 2025-12-09 11:15:12.977 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:13 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:13.710 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 09 11:15:15 compute-0 nova_compute[189493]: 2025-12-09 11:15:15.710 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:15 compute-0 nova_compute[189493]: 2025-12-09 11:15:15.805 189497 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.
Dec 09 11:15:15 compute-0 podman[251974]: 2025-12-09 11:15:15.962926013 +0000 UTC m=+0.108606038 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 09 11:15:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:17.014 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:15:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:17.015 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:15:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:17.015 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:15:17 compute-0 nova_compute[189493]: 2025-12-09 11:15:17.979 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:18 compute-0 podman[251998]: 2025-12-09 11:15:18.969393076 +0000 UTC m=+0.109118111 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, io.openshift.expose-services=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, version=9.4, config_id=edpm, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, container_name=kepler, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30)
Dec 09 11:15:18 compute-0 podman[251999]: 2025-12-09 11:15:18.988722221 +0000 UTC m=+0.136914968 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 09 11:15:20 compute-0 nova_compute[189493]: 2025-12-09 11:15:20.712 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:20 compute-0 podman[252036]: 2025-12-09 11:15:20.997376357 +0000 UTC m=+0.143246192 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 09 11:15:21 compute-0 podman[252037]: 2025-12-09 11:15:21.019871605 +0000 UTC m=+0.150599525 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:15:22 compute-0 nova_compute[189493]: 2025-12-09 11:15:22.982 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:24 compute-0 nova_compute[189493]: 2025-12-09 11:15:24.989 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:24 compute-0 nova_compute[189493]: 2025-12-09 11:15:24.990 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 09 11:15:25 compute-0 nova_compute[189493]: 2025-12-09 11:15:25.110 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 09 11:15:25 compute-0 nova_compute[189493]: 2025-12-09 11:15:25.715 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:27 compute-0 nova_compute[189493]: 2025-12-09 11:15:27.985 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:28 compute-0 podman[252073]: 2025-12-09 11:15:28.968590672 +0000 UTC m=+0.109113861 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 09 11:15:28 compute-0 podman[252072]: 2025-12-09 11:15:28.969155387 +0000 UTC m=+0.118896027 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 09 11:15:29 compute-0 podman[252074]: 2025-12-09 11:15:29.006223495 +0000 UTC m=+0.136278770 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec 09 11:15:29 compute-0 podman[203687]: time="2025-12-09T11:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:15:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:15:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Dec 09 11:15:30 compute-0 nova_compute[189493]: 2025-12-09 11:15:30.719 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:15:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:15:31 compute-0 openstack_network_exporter[205823]: 
Dec 09 11:15:32 compute-0 nova_compute[189493]: 2025-12-09 11:15:32.989 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:35 compute-0 nova_compute[189493]: 2025-12-09 11:15:35.723 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:37 compute-0 sshd-session[252144]: Invalid user dspace from 159.223.8.217 port 54468
Dec 09 11:15:37 compute-0 sshd-session[252144]: Connection closed by invalid user dspace 159.223.8.217 port 54468 [preauth]
Dec 09 11:15:37 compute-0 nova_compute[189493]: 2025-12-09 11:15:37.991 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:40 compute-0 nova_compute[189493]: 2025-12-09 11:15:40.727 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:40 compute-0 podman[252146]: 2025-12-09 11:15:40.972421187 +0000 UTC m=+0.115049457 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 09 11:15:42 compute-0 nova_compute[189493]: 2025-12-09 11:15:42.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:44 compute-0 nova_compute[189493]: 2025-12-09 11:15:44.957 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:44 compute-0 nova_compute[189493]: 2025-12-09 11:15:44.958 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:45 compute-0 nova_compute[189493]: 2025-12-09 11:15:45.730 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:45 compute-0 nova_compute[189493]: 2025-12-09 11:15:45.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:46 compute-0 podman[252166]: 2025-12-09 11:15:46.97417099 +0000 UTC m=+0.119957615 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 09 11:15:48 compute-0 nova_compute[189493]: 2025-12-09 11:15:47.998 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:48 compute-0 nova_compute[189493]: 2025-12-09 11:15:48.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:49 compute-0 nova_compute[189493]: 2025-12-09 11:15:49.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:49 compute-0 podman[252190]: 2025-12-09 11:15:49.963998448 +0000 UTC m=+0.100019184 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=)
Dec 09 11:15:49 compute-0 podman[252191]: 2025-12-09 11:15:49.973100086 +0000 UTC m=+0.097467607 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec 09 11:15:50 compute-0 nova_compute[189493]: 2025-12-09 11:15:50.240 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:50 compute-0 nova_compute[189493]: 2025-12-09 11:15:50.733 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:51 compute-0 podman[252230]: 2025-12-09 11:15:51.964935292 +0000 UTC m=+0.102594271 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true)
Dec 09 11:15:52 compute-0 podman[252231]: 2025-12-09 11:15:52.000410788 +0000 UTC m=+0.132562083 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible)
Dec 09 11:15:52 compute-0 nova_compute[189493]: 2025-12-09 11:15:52.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:53 compute-0 nova_compute[189493]: 2025-12-09 11:15:53.000 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.737 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.875 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.876 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.876 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.877 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 09 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.355 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 09 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.357 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5384MB free_disk=72.17571640014648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 09 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.357 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.358 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.821 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 09 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.822 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 09 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.915 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 09 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.076 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 09 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.077 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 09 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.099 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 09 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.145 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 09 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.180 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 09 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.204 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
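The inventory repeated in the two entries above fixes the provider's schedulable capacity. Assuming Placement's usual capacity formula, (total - reserved) * allocation_ratio, a quick back-of-the-envelope check follows — the values are copied from the log, but the formula itself is the standard Placement calculation rather than something stated in this log.

    # Hedged sketch: derive usable capacity from the inventory logged above,
    # assuming Placement computes capacity as (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2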
Dec 09 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.207 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 09 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.208 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:15:58 compute-0 nova_compute[189493]: 2025-12-09 11:15:58.004 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.209 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.210 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 09 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.211 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 09 11:15:59 compute-0 podman[203687]: time="2025-12-09T11:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 09 11:15:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec 09 11:15:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
Dec 09 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.782 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 09 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 09 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 09 11:15:59 compute-0 podman[252268]: 2025-12-09 11:15:59.970151356 +0000 UTC m=+0.102501529 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 09 11:15:59 compute-0 podman[252267]: 2025-12-09 11:15:59.979331566 +0000 UTC m=+0.116795842 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec 09 11:16:00 compute-0 podman[252269]: 2025-12-09 11:16:00.026910419 +0000 UTC m=+0.153599104 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 09 11:16:00 compute-0 nova_compute[189493]: 2025-12-09 11:16:00.739 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 09 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 09 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 09 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 09 11:16:03 compute-0 nova_compute[189493]: 2025-12-09 11:16:03.006 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:16:04 compute-0 sshd-session[252334]: Accepted publickey for zuul from 192.168.122.10 port 33944 ssh2: ECDSA SHA256:jrNFCe4AIo1ZJHDqosVYE7wuhhJFJFul9Io6WGyG4o0
Dec 09 11:16:04 compute-0 systemd-logind[806]: New session 32 of user zuul.
Dec 09 11:16:04 compute-0 systemd[1]: Started Session 32 of User zuul.
Dec 09 11:16:05 compute-0 sshd-session[252334]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 09 11:16:05 compute-0 sudo[252338]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 09 11:16:05 compute-0 sudo[252338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 09 11:16:05 compute-0 nova_compute[189493]: 2025-12-09 11:16:05.741 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:16:06 compute-0 sshd-session[252372]: Invalid user dspace from 159.223.8.217 port 51676
Dec 09 11:16:06 compute-0 sshd-session[252372]: Connection closed by invalid user dspace 159.223.8.217 port 51676 [preauth]
Dec 09 11:16:08 compute-0 nova_compute[189493]: 2025-12-09 11:16:08.008 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:16:10 compute-0 ovs-vsctl[252509]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 09 11:16:10 compute-0 nova_compute[189493]: 2025-12-09 11:16:10.744 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:16:11 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 252362 (sos)
Dec 09 11:16:11 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 09 11:16:11 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 09 11:16:11 compute-0 podman[252556]: 2025-12-09 11:16:11.78601611 +0000 UTC m=+0.137629096 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 09 11:16:12 compute-0 virtqemud[189118]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 09 11:16:12 compute-0 virtqemud[189118]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 09 11:16:12 compute-0 virtqemud[189118]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 09 11:16:13 compute-0 nova_compute[189493]: 2025-12-09 11:16:13.010 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:16:13 compute-0 crontab[252953]: (root) LIST (root)
Dec 09 11:16:15 compute-0 nova_compute[189493]: 2025-12-09 11:16:15.747 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:16:16 compute-0 systemd[1]: Starting Hostname Service...
Dec 09 11:16:16 compute-0 systemd[1]: Started Hostname Service.
Dec 09 11:16:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:16:17.016 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 09 11:16:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:16:17.016 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 09 11:16:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:16:17.017 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 09 11:16:17 compute-0 podman[253142]: 2025-12-09 11:16:17.528085691 +0000 UTC m=+0.090137125 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 09 11:16:18 compute-0 nova_compute[189493]: 2025-12-09 11:16:18.012 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:16:20 compute-0 nova_compute[189493]: 2025-12-09 11:16:20.751 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 09 11:16:20 compute-0 podman[253517]: 2025-12-09 11:16:20.922652864 +0000 UTC m=+0.076237444 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 09 11:16:20 compute-0 podman[253516]: 2025-12-09 11:16:20.925144799 +0000 UTC m=+0.080417162 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, vcs-type=git, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
